(200; 2.650048ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:51:55.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5540" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":37,"skipped":636,"failed":0}
SSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:51:55.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 08:51:55.588: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
(200; 19.698646ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:51:55.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-136" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":38,"skipped":643,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:51:55.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Mar 9 08:51:55.721: INFO: Waiting up to 5m0s for pod "client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c" in namespace "containers-7431" to be "success or failure"
Mar 9 08:51:55.737: INFO: Pod "client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.162247ms
Mar 9 08:51:57.741: INFO: Pod "client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019867769s
STEP: Saw pod success
Mar 9 08:51:57.741: INFO: Pod "client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c" satisfied condition "success or failure"
Mar 9 08:51:57.744: INFO: Trying to get logs from node jerma-worker2 pod client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c container test-container:
STEP: delete the pod
Mar 9 08:51:57.763: INFO: Waiting for pod client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c to disappear
Mar 9 08:51:57.767: INFO: Pod client-containers-834ee978-c00f-4976-bfcf-2eb623821f0c no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:51:57.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7431" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:51:57.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 9 08:51:57.847: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 9 08:51:57.859: INFO: Waiting for terminating namespaces to be deleted...
Mar 9 08:51:57.864: INFO:
Logging pods the kubelet thinks are on node jerma-worker before test
Mar 9 08:51:57.870: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded)
Mar 9 08:51:57.870: INFO: Container kube-proxy ready: true, restart count 0
Mar 9 08:51:57.870: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded)
Mar 9 08:51:57.870: INFO: Container kindnet-cni ready: true, restart count 0
Mar 9 08:51:57.871: INFO:
Logging pods the kubelet thinks are on node jerma-worker2 before test
Mar 9 08:51:57.875: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded)
Mar 9 08:51:57.875: INFO: Container kube-proxy ready: true, restart count 0
Mar 9 08:51:57.875: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container status recorded)
Mar 9 08:51:57.875: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-328891b8-5a44-47e5-bc35-e0acd108361b 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-328891b8-5a44-47e5-bc35-e0acd108361b off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-328891b8-5a44-47e5-bc35-e0acd108361b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:57:04.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5463" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:306.281 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":40,"skipped":688,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:57:04.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0309 08:57:10.169869 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 9 08:57:10.169: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:57:10.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7956" for this suite.
• [SLOW TEST:6.109 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":41,"skipped":704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:57:10.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar 9 08:57:10.237: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:57:14.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1120" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":42,"skipped":804,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:57:14.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:57:25.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5142" for this suite.
• [SLOW TEST:11.203 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":43,"skipped":807,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:57:25.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 9 08:57:25.847: INFO: Waiting up to 5m0s for pod "pod-2796e969-f82a-44e3-9fcc-c9078a53f94a" in namespace "emptydir-4410" to be "success or failure"
Mar 9 08:57:25.857: INFO: Pod "pod-2796e969-f82a-44e3-9fcc-c9078a53f94a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.866294ms
Mar 9 08:57:27.861: INFO: Pod "pod-2796e969-f82a-44e3-9fcc-c9078a53f94a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013892168s
STEP: Saw pod success
Mar 9 08:57:27.861: INFO: Pod "pod-2796e969-f82a-44e3-9fcc-c9078a53f94a" satisfied condition "success or failure"
Mar 9 08:57:27.864: INFO: Trying to get logs from node jerma-worker pod pod-2796e969-f82a-44e3-9fcc-c9078a53f94a container test-container:
STEP: delete the pod
Mar 9 08:57:27.901: INFO: Waiting for pod pod-2796e969-f82a-44e3-9fcc-c9078a53f94a to disappear
Mar 9 08:57:27.919: INFO: Pod pod-2796e969-f82a-44e3-9fcc-c9078a53f94a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:57:27.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4410" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":824,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:57:27.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 9 08:57:27.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5432'
Mar 9 08:57:28.120: INFO: stderr: ""
Mar 9 08:57:28.120: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Mar 9 08:57:33.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5432 -o json'
Mar 9 08:57:33.274: INFO: stderr: ""
Mar 9 08:57:33.274: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-09T08:57:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5432\",\n \"resourceVersion\": \"260050\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5432/pods/e2e-test-httpd-pod\",\n \"uid\": \"ab2acd42-2f61-47ca-93d7-adbcef10ada1\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zxzl7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zxzl7\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-zxzl7\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-09T08:57:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-09T08:57:29Z\",\n \"status\": \"True\",\n 
\"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-09T08:57:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-09T08:57:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b3ffdeb040777790e1b446594be301ff6aec38f73e4c734f26de68041b0e1297\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-09T08:57:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.227\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.227\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-09T08:57:28Z\"\n }\n}\n"
STEP: replace the image in the pod
Mar 9 08:57:33.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5432'
Mar 9 08:57:33.557: INFO: stderr: ""
Mar 9 08:57:33.557: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902
Mar 9 08:57:33.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5432'
Mar 9 08:57:46.091: INFO: stderr: ""
Mar 9 08:57:46.091: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:57:46.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5432" for this suite.
• [SLOW TEST:18.175 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":45,"skipped":831,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:57:46.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Mar 9 08:57:48.199: INFO: &Pod{ObjectMeta:{send-events-e8f04675-612d-4be3-949c-4eb4c0330c85 events-9454 /api/v1/namespaces/events-9454/pods/send-events-e8f04675-612d-4be3-949c-4eb4c0330c85 b6d4b647-e1d1-474d-927e-901447ebac2b 260141 0 2020-03-09 08:57:46 +0000 UTC map[name:foo time:153350709] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r7224,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r7224,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r7224,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{S
ELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:57:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:57:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:57:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 08:57:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.228,StartTime:2020-03-09 08:57:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 08:57:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://fec33c2a3aaf77437e2254130c9182fa738a3edce68519b0e38e3198edb78934,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Mar 9 08:57:50.204: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Mar 9 08:57:52.208: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:57:52.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9454" for this suite.
• [SLOW TEST:6.134 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":46,"skipped":844,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:57:52.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4061
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 9 08:57:52.321: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 9 08:58:10.477: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.237:8080/dial?request=hostname&protocol=udp&host=10.244.2.229&port=8081&tries=1'] Namespace:pod-network-test-4061 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 08:58:10.477: INFO: >>> kubeConfig: /root/.kube/config
I0309 08:58:10.509740 6 log.go:172] (0xc0016084d0) (0xc00221af00) Create stream
I0309 08:58:10.509777 6 log.go:172] (0xc0016084d0) (0xc00221af00) Stream added, broadcasting: 1
I0309 08:58:10.512698 6 log.go:172] (0xc0016084d0) Reply frame received for 1
I0309 08:58:10.512744 6 log.go:172] (0xc0016084d0) (0xc00221afa0) Create stream
I0309 08:58:10.512759 6 log.go:172] (0xc0016084d0) (0xc00221afa0) Stream added, broadcasting: 3
I0309 08:58:10.513745 6 log.go:172] (0xc0016084d0) Reply frame received for 3
I0309 08:58:10.513774 6 log.go:172] (0xc0016084d0) (0xc001e9ebe0) Create stream
I0309 08:58:10.513785 6 log.go:172] (0xc0016084d0) (0xc001e9ebe0) Stream added, broadcasting: 5
I0309 08:58:10.514571 6 log.go:172] (0xc0016084d0) Reply frame received for 5
I0309 08:58:10.586012 6 log.go:172] (0xc0016084d0) Data frame received for 3
I0309 08:58:10.586039 6 log.go:172] (0xc00221afa0) (3) Data frame handling
I0309 08:58:10.586057 6 log.go:172] (0xc00221afa0) (3) Data frame sent
I0309 08:58:10.586550 6 log.go:172] (0xc0016084d0) Data frame received for 3
I0309 08:58:10.586619 6 log.go:172] (0xc00221afa0) (3) Data frame handling
I0309 08:58:10.586701 6 log.go:172] (0xc0016084d0) Data frame received for 5
I0309 08:58:10.586715 6 log.go:172] (0xc001e9ebe0) (5) Data frame handling
I0309 08:58:10.588244 6 log.go:172] (0xc0016084d0) Data frame received for 1
I0309 08:58:10.588261 6 log.go:172] (0xc00221af00) (1) Data frame handling
I0309 08:58:10.588270 6 log.go:172] (0xc00221af00) (1) Data frame sent
I0309 08:58:10.588283 6 log.go:172] (0xc0016084d0) (0xc00221af00) Stream removed, broadcasting: 1
I0309 08:58:10.588295 6 log.go:172] (0xc0016084d0) Go away received
I0309 08:58:10.588521 6 log.go:172] (0xc0016084d0) (0xc00221af00) Stream removed, broadcasting: 1
I0309 08:58:10.588536 6 log.go:172] (0xc0016084d0) (0xc00221afa0) Stream removed, broadcasting: 3
I0309 08:58:10.588545 6 log.go:172] (0xc0016084d0) (0xc001e9ebe0) Stream removed, broadcasting: 5
Mar 9 08:58:10.588: INFO: Waiting for responses: map[]
Mar 9 08:58:10.591: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.237:8080/dial?request=hostname&protocol=udp&host=10.244.1.236&port=8081&tries=1'] Namespace:pod-network-test-4061 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 08:58:10.591: INFO: >>> kubeConfig: /root/.kube/config
I0309 08:58:10.617823 6 log.go:172] (0xc002c80420) (0xc001e9f040) Create stream
I0309 08:58:10.617840 6 log.go:172] (0xc002c80420) (0xc001e9f040) Stream added, broadcasting: 1
I0309 08:58:10.620574 6 log.go:172] (0xc002c80420) Reply frame received for 1
I0309 08:58:10.620606 6 log.go:172] (0xc002c80420) (0xc0023006e0) Create stream
I0309 08:58:10.620617 6 log.go:172] (0xc002c80420) (0xc0023006e0) Stream added, broadcasting: 3
I0309 08:58:10.621550 6 log.go:172] (0xc002c80420) Reply frame received for 3
I0309 08:58:10.621577 6 log.go:172] (0xc002c80420) (0xc0023008c0) Create stream
I0309 08:58:10.621588 6 log.go:172] (0xc002c80420) (0xc0023008c0) Stream added, broadcasting: 5
I0309 08:58:10.622626 6 log.go:172] (0xc002c80420) Reply frame received for 5
I0309 08:58:10.692146 6 log.go:172] (0xc002c80420) Data frame received for 3
I0309 08:58:10.692229 6 log.go:172] (0xc0023006e0) (3) Data frame handling
I0309 08:58:10.692320 6 log.go:172] (0xc0023006e0) (3) Data frame sent
I0309 08:58:10.692633 6 log.go:172] (0xc002c80420) Data frame received for 5
I0309 08:58:10.692670 6 log.go:172] (0xc0023008c0) (5) Data frame handling
I0309 08:58:10.692706 6 log.go:172] (0xc002c80420) Data frame received for 3
I0309 08:58:10.692726 6 log.go:172] (0xc0023006e0) (3) Data frame handling
I0309 08:58:10.694342 6 log.go:172] (0xc002c80420) Data frame received for 1
I0309 08:58:10.694379 6 log.go:172] (0xc001e9f040) (1) Data frame handling
I0309 08:58:10.694405 6 log.go:172] (0xc001e9f040) (1) Data frame sent
I0309 08:58:10.694430 6 log.go:172] (0xc002c80420) (0xc001e9f040) Stream removed, broadcasting: 1
I0309 08:58:10.694480 6 log.go:172] (0xc002c80420) Go away received
I0309 08:58:10.694515 6 log.go:172] (0xc002c80420) (0xc001e9f040) Stream removed, broadcasting: 1
I0309 08:58:10.694541 6 log.go:172] (0xc002c80420) (0xc0023006e0) Stream removed, broadcasting: 3
I0309 08:58:10.694554 6 log.go:172] (0xc002c80420) (0xc0023008c0) Stream removed, broadcasting: 5
Mar 9 08:58:10.694: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:58:10.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4061" for this suite.
• [SLOW TEST:18.464 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":858,"failed":0}
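The probes in this test are issued by exec'ing `curl` against the agnhost test container's `/dial` endpoint, passing the target pod's IP and port as query parameters. A minimal sketch of how such a query URL is assembled (the function name is hypothetical; the parameters mirror the ones visible in the log above):

```python
from urllib.parse import urlencode

def dial_url(dial_pod_ip, target_ip, port, protocol="udp", tries=1):
    """Build an agnhost /dial query like the ones exec'd in the log.

    The dial container listening on dial_pod_ip:8080 is asked to probe
    target_ip:port over the given protocol and report the hostnames it
    reaches; the e2e test then checks every expected hostname responded.
    """
    query = urlencode({
        "request": "hostname",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{dial_pod_ip}:8080/dial?{query}"

# Reproduces the first probe URL seen in the log above:
print(dial_url("10.244.1.237", "10.244.2.229", 8081))
```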
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:58:10.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Mar 9 08:58:10.819: INFO: Waiting up to 5m0s for pod "var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344" in namespace "var-expansion-5571" to be "success or failure"
Mar 9 08:58:10.849: INFO: Pod "var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344": Phase="Pending", Reason="", readiness=false. Elapsed: 30.672173ms
Mar 9 08:58:12.853: INFO: Pod "var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034356947s
STEP: Saw pod success
Mar 9 08:58:12.853: INFO: Pod "var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344" satisfied condition "success or failure"
Mar 9 08:58:12.856: INFO: Trying to get logs from node jerma-worker pod var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344 container dapi-container:
STEP: delete the pod
Mar 9 08:58:12.920: INFO: Waiting for pod var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344 to disappear
Mar 9 08:58:12.925: INFO: Pod var-expansion-9d0569fc-7ecf-48bc-90f6-484e0349b344 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:58:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5571" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":868,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:58:12.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 9 08:58:12.981: INFO: Waiting up to 5m0s for pod "downward-api-0deb44e2-3303-4108-b152-95f640790583" in namespace "downward-api-8547" to be "success or failure"
Mar 9 08:58:12.985: INFO: Pod "downward-api-0deb44e2-3303-4108-b152-95f640790583": Phase="Pending", Reason="", readiness=false. Elapsed: 3.300634ms
Mar 9 08:58:14.988: INFO: Pod "downward-api-0deb44e2-3303-4108-b152-95f640790583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007101468s
STEP: Saw pod success
Mar 9 08:58:14.988: INFO: Pod "downward-api-0deb44e2-3303-4108-b152-95f640790583" satisfied condition "success or failure"
Mar 9 08:58:14.991: INFO: Trying to get logs from node jerma-worker pod downward-api-0deb44e2-3303-4108-b152-95f640790583 container dapi-container:
STEP: delete the pod
Mar 9 08:58:15.060: INFO: Waiting for pod downward-api-0deb44e2-3303-4108-b152-95f640790583 to disappear
Mar 9 08:58:15.080: INFO: Pod downward-api-0deb44e2-3303-4108-b152-95f640790583 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:58:15.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8547" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":892,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:58:15.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6561.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6561.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6561.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6561.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6561.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6561.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 9 08:58:19.222: INFO: DNS probes using dns-6561/dns-test-dc57b1c0-4a57-491c-85e8-8720ca5257b0 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 08:58:19.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6561" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":50,"skipped":912,"failed":0}
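The probe scripts above derive each pod's DNS A record by rewriting the pod IP with `awk` (dots become dashes) and appending the namespace and the pod DNS suffix. The same rewrite as a small Python sketch (function name hypothetical):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Replicate the shell/awk rewrite from the probe script: dots in
    the pod IP become dashes, then the namespace and the cluster pod
    DNS suffix are appended."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

# e.g. a pod at 10.244.1.236 in the test namespace dns-6561:
print(pod_a_record("10.244.1.236", "dns-6561"))
# → 10-244-1-236.dns-6561.pod.cluster.local
```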
SSSSSSSS
------------------------------
[k8s.io] Probing container
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 08:58:19.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 in namespace container-probe-3798
Mar 9 08:58:23.401: INFO: Started pod liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 in namespace container-probe-3798
STEP: checking the pod's current state and verifying that restartCount is present
Mar 9 08:58:23.404: INFO: Initial restart count of pod liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is 0
Mar 9 08:58:39.437: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 1 (16.03297375s elapsed)
Mar 9 08:58:59.478: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 2 (36.073880861s elapsed)
Mar 9 08:59:19.520: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 3 (56.116112124s elapsed)
Mar 9 08:59:39.563: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 4 (1m16.158633582s elapsed)
Mar 9 09:00:53.737: INFO: Restart count of pod container-probe-3798/liveness-22ff31f5-e294-428a-8745-f6ae6780ae96 is now 5 (2m30.333103315s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:00:53.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3798" for this suite.
• [SLOW TEST:154.455 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":920,"failed":0}
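Differencing the elapsed times logged at each restart shows steady ~20 s intervals followed by a much larger final gap, which likely reflects the kubelet's crash-loop back-off delaying later restarts (that attribution is an interpretation, not stated in the log):

```python
# Elapsed seconds at each observed restart, taken from the log above.
elapsed = [16.03, 36.07, 56.12, 76.16, 150.33]

# Interval between consecutive restarts; the jump at the end is
# consistent with restart back-off kicking in.
intervals = [round(b - a, 2) for a, b in zip(elapsed, elapsed[1:])]
print(intervals)
# → [20.04, 20.05, 20.04, 74.17]
```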
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:00:53.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 9 09:00:53.918: INFO: Waiting up to 5m0s for pod "downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96" in namespace "downward-api-2845" to be "success or failure"
Mar 9 09:00:53.979: INFO: Pod "downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96": Phase="Pending", Reason="", readiness=false. Elapsed: 61.442904ms
Mar 9 09:00:55.984: INFO: Pod "downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066011704s
STEP: Saw pod success
Mar 9 09:00:55.984: INFO: Pod "downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96" satisfied condition "success or failure"
Mar 9 09:00:55.986: INFO: Trying to get logs from node jerma-worker pod downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96 container dapi-container:
STEP: delete the pod
Mar 9 09:00:56.018: INFO: Waiting for pod downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96 to disappear
Mar 9 09:00:56.027: INFO: Pod downward-api-2e74e100-d03d-4738-b6ee-f9e0593e4b96 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:00:56.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2845" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:00:56.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:01:22.153: INFO: Container started at 2020-03-09 09:00:57 +0000 UTC, pod became ready at 2020-03-09 09:01:21 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:01:22.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5270" for this suite.
• [SLOW TEST:26.123 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":1066,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:01:22.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6729
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Mar 9 09:01:22.240: INFO: Found 0 stateful pods, waiting for 3
Mar 9 09:01:32.245: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 9 09:01:32.245: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 9 09:01:32.245: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 9 09:01:32.273: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 9 09:01:42.306: INFO: Updating stateful set ss2
Mar 9 09:01:42.338: INFO: Waiting for Pod statefulset-6729/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Mar 9 09:01:52.841: INFO: Found 2 stateful pods, waiting for 3
Mar 9 09:02:02.846: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 9 09:02:02.846: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 9 09:02:02.846: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 9 09:02:02.869: INFO: Updating stateful set ss2
Mar 9 09:02:02.887: INFO: Waiting for Pod statefulset-6729/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 9 09:02:12.902: INFO: Waiting for Pod statefulset-6729/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 9 09:02:22.912: INFO: Updating stateful set ss2
Mar 9 09:02:22.950: INFO: Waiting for StatefulSet statefulset-6729/ss2 to complete update
Mar 9 09:02:22.950: INFO: Waiting for Pod statefulset-6729/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 9 09:02:32.977: INFO: Waiting for StatefulSet statefulset-6729/ss2 to complete update
Mar 9 09:02:32.977: INFO: Waiting for Pod statefulset-6729/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 9 09:02:42.956: INFO: Deleting all statefulset in ns statefulset-6729
Mar 9 09:02:42.959: INFO: Scaling statefulset ss2 to 0
Mar 9 09:03:02.975: INFO: Waiting for statefulset status.replicas updated to 0
Mar 9 09:03:02.978: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:03:02.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6729" for this suite.
• [SLOW TEST:100.837 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":54,"skipped":1074,"failed":0}
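The canary and phased steps above work through the StatefulSet's RollingUpdate `partition`: only pods whose ordinal is at or above the partition receive the new revision. A simplified model of that selection (not the controller's actual code), matching what the log shows — a partition above the replica count updates nothing, partition 2 updates only ss2-2, and lowering it widens the roll-out:

```python
def pods_to_update(replicas: int, partition: int) -> list:
    """Simplified model of RollingUpdate partitioning: pods with
    ordinal >= partition move to the new revision; lower ordinals
    stay on the old one."""
    return ["ss2-%d" % i for i in range(replicas) if i >= partition]

# Partition greater than replicas: no update applied.
print(pods_to_update(3, 4))   # → []
# Canary: only the highest ordinal, ss2-2, is updated first.
print(pods_to_update(3, 2))   # → ['ss2-2']
# Phased roll-out complete: partition 0 covers every pod.
print(pods_to_update(3, 0))   # → ['ss2-0', 'ss2-1', 'ss2-2']
```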
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:03:02.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 9 09:03:07.175: INFO: DNS probes using dns-test-fc354793-cd7c-455e-9d3d-79389a1638a0 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 9 09:03:11.302: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:11.305: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:11.305: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local]
Mar 9 09:03:16.308: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:16.310: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:16.310: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local]
Mar 9 09:03:21.309: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:21.311: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:21.311: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local]
Mar 9 09:03:26.308: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:26.310: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:26.310: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local]
Mar 9 09:03:31.308: INFO: File wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:31.312: INFO: File jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local from pod dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 9 09:03:31.312: INFO: Lookups using dns-3434/dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 failed for: [wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local]
Mar 9 09:03:36.311: INFO: DNS probes using dns-test-2b78ba3a-a2f8-4452-b834-6317dd1e1ce4 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3434.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3434.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 9 09:03:40.466: INFO: DNS probes using dns-test-e47a97f9-0d3d-4609-abc1-bc948d21c634 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:03:40.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3434" for this suite.
• [SLOW TEST:37.592 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":55,"skipped":1117,"failed":0}
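The retries above come from comparing `dig +short ... CNAME` output (which carries a trailing dot and newline, visible in the "contains 'foo.example.com.\n'" lines) against the new ExternalName target until DNS catches up. A sketch of that comparison (function name hypothetical):

```python
def cname_matches(dig_output: str, expected: str) -> bool:
    """Compare a `dig +short ... CNAME` result against an expected
    target, tolerating the trailing dot and newline seen in the log."""
    return dig_output.strip().rstrip(".") == expected.strip().rstrip(".")

# Until the ExternalName change propagates, the lookup still returns
# the old target and the probe keeps retrying:
print(cname_matches("foo.example.com.\n", "bar.example.com."))  # → False
print(cname_matches("bar.example.com.\n", "bar.example.com."))  # → True
```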
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:03:40.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6565
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 9 09:03:40.680: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 9 09:03:58.802: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostname&protocol=http&host=10.244.2.241&port=8080&tries=1'] Namespace:pod-network-test-6565 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 09:03:58.802: INFO: >>> kubeConfig: /root/.kube/config
I0309 09:03:58.837896 6 log.go:172] (0xc002c802c0) (0xc001f15900) Create stream
I0309 09:03:58.837925 6 log.go:172] (0xc002c802c0) (0xc001f15900) Stream added, broadcasting: 1
I0309 09:03:58.839936 6 log.go:172] (0xc002c802c0) Reply frame received for 1
I0309 09:03:58.839974 6 log.go:172] (0xc002c802c0) (0xc001487860) Create stream
I0309 09:03:58.839987 6 log.go:172] (0xc002c802c0) (0xc001487860) Stream added, broadcasting: 3
I0309 09:03:58.840768 6 log.go:172] (0xc002c802c0) Reply frame received for 3
I0309 09:03:58.840803 6 log.go:172] (0xc002c802c0) (0xc001e9e3c0) Create stream
I0309 09:03:58.840816 6 log.go:172] (0xc002c802c0) (0xc001e9e3c0) Stream added, broadcasting: 5
I0309 09:03:58.841673 6 log.go:172] (0xc002c802c0) Reply frame received for 5
I0309 09:03:58.895811 6 log.go:172] (0xc002c802c0) Data frame received for 3
I0309 09:03:58.895831 6 log.go:172] (0xc001487860) (3) Data frame handling
I0309 09:03:58.895846 6 log.go:172] (0xc001487860) (3) Data frame sent
I0309 09:03:58.896312 6 log.go:172] (0xc002c802c0) Data frame received for 5
I0309 09:03:58.896337 6 log.go:172] (0xc001e9e3c0) (5) Data frame handling
I0309 09:03:58.896541 6 log.go:172] (0xc002c802c0) Data frame received for 3
I0309 09:03:58.896560 6 log.go:172] (0xc001487860) (3) Data frame handling
I0309 09:03:58.898059 6 log.go:172] (0xc002c802c0) Data frame received for 1
I0309 09:03:58.898085 6 log.go:172] (0xc001f15900) (1) Data frame handling
I0309 09:03:58.898106 6 log.go:172] (0xc001f15900) (1) Data frame sent
I0309 09:03:58.898330 6 log.go:172] (0xc002c802c0) (0xc001f15900) Stream removed, broadcasting: 1
I0309 09:03:58.898435 6 log.go:172] (0xc002c802c0) (0xc001f15900) Stream removed, broadcasting: 1
I0309 09:03:58.898453 6 log.go:172] (0xc002c802c0) (0xc001487860) Stream removed, broadcasting: 3
I0309 09:03:58.898475 6 log.go:172] (0xc002c802c0) Go away received
I0309 09:03:58.898504 6 log.go:172] (0xc002c802c0) (0xc001e9e3c0) Stream removed, broadcasting: 5
Mar 9 09:03:58.898: INFO: Waiting for responses: map[]
Mar 9 09:03:58.901: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostname&protocol=http&host=10.244.1.244&port=8080&tries=1'] Namespace:pod-network-test-6565 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 09:03:58.901: INFO: >>> kubeConfig: /root/.kube/config
I0309 09:03:58.930569 6 log.go:172] (0xc001608370) (0xc000fe2aa0) Create stream
I0309 09:03:58.930593 6 log.go:172] (0xc001608370) (0xc000fe2aa0) Stream added, broadcasting: 1
I0309 09:03:58.932517 6 log.go:172] (0xc001608370) Reply frame received for 1
I0309 09:03:58.932548 6 log.go:172] (0xc001608370) (0xc001f159a0) Create stream
I0309 09:03:58.932559 6 log.go:172] (0xc001608370) (0xc001f159a0) Stream added, broadcasting: 3
I0309 09:03:58.933368 6 log.go:172] (0xc001608370) Reply frame received for 3
I0309 09:03:58.933399 6 log.go:172] (0xc001608370) (0xc001f15a40) Create stream
I0309 09:03:58.933411 6 log.go:172] (0xc001608370) (0xc001f15a40) Stream added, broadcasting: 5
I0309 09:03:58.934250 6 log.go:172] (0xc001608370) Reply frame received for 5
I0309 09:03:59.022599 6 log.go:172] (0xc001608370) Data frame received for 3
I0309 09:03:59.022626 6 log.go:172] (0xc001f159a0) (3) Data frame handling
I0309 09:03:59.022642 6 log.go:172] (0xc001f159a0) (3) Data frame sent
I0309 09:03:59.023055 6 log.go:172] (0xc001608370) Data frame received for 3
I0309 09:03:59.023074 6 log.go:172] (0xc001f159a0) (3) Data frame handling
I0309 09:03:59.023312 6 log.go:172] (0xc001608370) Data frame received for 5
I0309 09:03:59.023326 6 log.go:172] (0xc001f15a40) (5) Data frame handling
I0309 09:03:59.024902 6 log.go:172] (0xc001608370) Data frame received for 1
I0309 09:03:59.024923 6 log.go:172] (0xc000fe2aa0) (1) Data frame handling
I0309 09:03:59.024933 6 log.go:172] (0xc000fe2aa0) (1) Data frame sent
I0309 09:03:59.024942 6 log.go:172] (0xc001608370) (0xc000fe2aa0) Stream removed, broadcasting: 1
I0309 09:03:59.024955 6 log.go:172] (0xc001608370) Go away received
I0309 09:03:59.025140 6 log.go:172] (0xc001608370) (0xc000fe2aa0) Stream removed, broadcasting: 1
I0309 09:03:59.025171 6 log.go:172] (0xc001608370) (0xc001f159a0) Stream removed, broadcasting: 3
I0309 09:03:59.025185 6 log.go:172] (0xc001608370) (0xc001f15a40) Stream removed, broadcasting: 5
Mar 9 09:03:59.025: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:03:59.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6565" for this suite.
• [SLOW TEST:18.442 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1122,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:03:59.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:03:59.098: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:03:59.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4334" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":57,"skipped":1157,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:03:59.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Mar 9 09:03:59.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Mar 9 09:04:10.446: INFO: >>> kubeConfig: /root/.kube/config
Mar 9 09:04:13.393: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:04:22.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8376" for this suite.
• [SLOW TEST:23.097 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":58,"skipped":1172,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:04:22.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 9 09:04:22.846: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15" in namespace "downward-api-8670" to be "success or failure"
Mar 9 09:04:22.851: INFO: Pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116587ms
Mar 9 09:04:25.200: INFO: Pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353371368s
Mar 9 09:04:27.204: INFO: Pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.357162553s
STEP: Saw pod success
Mar 9 09:04:27.204: INFO: Pod "downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15" satisfied condition "success or failure"
Mar 9 09:04:27.206: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15 container client-container:
STEP: delete the pod
Mar 9 09:04:27.319: INFO: Waiting for pod downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15 to disappear
Mar 9 09:04:27.324: INFO: Pod downwardapi-volume-cf55ef0c-b77c-4af7-9ffc-dd93d7650b15 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:04:27.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8670" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1187,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:04:27.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 9 09:04:28.172: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 9 09:04:31.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:04:31.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6315-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:04:32.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9555" for this suite.
STEP: Destroying namespace "webhook-9555-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.229 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":60,"skipped":1202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:04:32.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-15ea68aa-89c8-4e36-a616-8cd86ef88787 in namespace container-probe-1670
Mar 9 09:04:34.615: INFO: Started pod test-webserver-15ea68aa-89c8-4e36-a616-8cd86ef88787 in namespace container-probe-1670
STEP: checking the pod's current state and verifying that restartCount is present
Mar 9 09:04:34.621: INFO: Initial restart count of pod test-webserver-15ea68aa-89c8-4e36-a616-8cd86ef88787 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:08:35.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1670" for this suite.
• [SLOW TEST:242.727 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1229,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:08:35.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 9 09:08:39.396: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 9 09:08:39.407: INFO: Pod pod-with-prestop-http-hook still exists
Mar 9 09:08:41.408: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 9 09:08:41.411: INFO: Pod pod-with-prestop-http-hook still exists
Mar 9 09:08:43.408: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 9 09:08:43.412: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:08:43.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2199" for this suite.
• [SLOW TEST:8.153 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1231,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:08:43.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 9 09:08:51.575: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 9 09:08:51.584: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 9 09:08:53.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 9 09:08:53.588: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 9 09:08:55.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 9 09:08:55.588: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 9 09:08:57.584: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 9 09:08:57.589: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:08:57.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-515" for this suite.
• [SLOW TEST:14.185 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1237,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:08:57.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Mar 9 09:09:01.716: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9005 PodName:pod-sharedvolume-7fa73bcc-fc1f-4225-bec5-450e5cc8936c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 09:09:01.716: INFO: >>> kubeConfig: /root/.kube/config
I0309 09:09:01.758714 6 log.go:172] (0xc0027bbd90) (0xc001e9e820) Create stream
I0309 09:09:01.758751 6 log.go:172] (0xc0027bbd90) (0xc001e9e820) Stream added, broadcasting: 1
I0309 09:09:01.761074 6 log.go:172] (0xc0027bbd90) Reply frame received for 1
I0309 09:09:01.761118 6 log.go:172] (0xc0027bbd90) (0xc001e9e8c0) Create stream
I0309 09:09:01.761133 6 log.go:172] (0xc0027bbd90) (0xc001e9e8c0) Stream added, broadcasting: 3
I0309 09:09:01.762135 6 log.go:172] (0xc0027bbd90) Reply frame received for 3
I0309 09:09:01.762173 6 log.go:172] (0xc0027bbd90) (0xc001e9e960) Create stream
I0309 09:09:01.762187 6 log.go:172] (0xc0027bbd90) (0xc001e9e960) Stream added, broadcasting: 5
I0309 09:09:01.763199 6 log.go:172] (0xc0027bbd90) Reply frame received for 5
I0309 09:09:01.831003 6 log.go:172] (0xc0027bbd90) Data frame received for 5
I0309 09:09:01.831040 6 log.go:172] (0xc001e9e960) (5) Data frame handling
I0309 09:09:01.831063 6 log.go:172] (0xc0027bbd90) Data frame received for 3
I0309 09:09:01.831077 6 log.go:172] (0xc001e9e8c0) (3) Data frame handling
I0309 09:09:01.831092 6 log.go:172] (0xc001e9e8c0) (3) Data frame sent
I0309 09:09:01.831106 6 log.go:172] (0xc0027bbd90) Data frame received for 3
I0309 09:09:01.831124 6 log.go:172] (0xc001e9e8c0) (3) Data frame handling
I0309 09:09:01.832280 6 log.go:172] (0xc0027bbd90) Data frame received for 1
I0309 09:09:01.832304 6 log.go:172] (0xc001e9e820) (1) Data frame handling
I0309 09:09:01.832325 6 log.go:172] (0xc001e9e820) (1) Data frame sent
I0309 09:09:01.832343 6 log.go:172] (0xc0027bbd90) (0xc001e9e820) Stream removed, broadcasting: 1
I0309 09:09:01.832417 6 log.go:172] (0xc0027bbd90) (0xc001e9e820) Stream removed, broadcasting: 1
I0309 09:09:01.832436 6 log.go:172] (0xc0027bbd90) (0xc001e9e8c0) Stream removed, broadcasting: 3
I0309 09:09:01.832457 6 log.go:172] (0xc0027bbd90) Go away received
I0309 09:09:01.832498 6 log.go:172] (0xc0027bbd90) (0xc001e9e960) Stream removed, broadcasting: 5
Mar 9 09:09:01.832: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:09:01.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9005" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":64,"skipped":1258,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:09:01.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:09:18.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3747" for this suite.
• [SLOW TEST:16.262 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":65,"skipped":1269,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:09:18.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-ad45d318-bb8d-4497-9d42-4a46ec176c69
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-ad45d318-bb8d-4497-9d42-4a46ec176c69
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:09:22.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-664" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1275,"failed":0}
------------------------------
[sig-network] DNS
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:09:22.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9389 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9389;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9389 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9389;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9389.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9389.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9389.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9389.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9389.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9389.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9389.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 60.145.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.145.60_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 60.145.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.145.60_tcp@PTR;
  sleep 1;
done
STEP: Running these commands on jessie:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9389 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9389;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9389 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9389;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9389.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9389.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9389.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9389.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9389.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9389.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9389.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9389.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9389.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 60.145.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.145.60_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 60.145.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.145.60_tcp@PTR;
  sleep 1;
done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 9 09:09:26.445: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.448: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.451: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.454: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.457: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.460: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.463: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.465: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.483: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.486: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.488: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.490: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.493: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.495: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.498: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.500: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:26.514: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc]
Mar 9 09:09:31.518: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.521: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.523: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.525: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.528: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.535: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.551: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.553: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.555: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.557: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.560: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.562: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.564: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.566: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:31.584: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc]
Mar 9 09:09:36.522: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.525: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.529: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.536: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.538: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.540: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.543: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.564: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.567: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.570: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.576: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.579: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:36.599: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc]
Mar 9 09:09:41.518: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.522: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.525: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.529: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.532: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.536: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.539: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.542: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.564: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.567: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.570: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.577: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.580: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.583: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:41.603: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc]
Mar 9 09:09:46.518: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.522: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.524: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.527: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.529: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.535: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.539: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.559: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.562: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.564: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.591: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.597: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.600: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:46.615: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc]
Mar 9 09:09:51.518: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.521: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.523: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.526: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.528: INFO: Unable to read wheezy_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.530: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.535: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.551: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.553: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.555: INFO: Unable to read jessie_udp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.557: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389 from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.559: INFO: Unable to read jessie_udp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.562: INFO: Unable to read jessie_tcp@dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.564: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.566: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc from pod dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76: the server could not find the requested resource (get pods dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76)
Mar 9 09:09:51.579: INFO: Lookups using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9389 wheezy_tcp@dns-test-service.dns-9389 wheezy_udp@dns-test-service.dns-9389.svc wheezy_tcp@dns-test-service.dns-9389.svc wheezy_udp@_http._tcp.dns-test-service.dns-9389.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9389.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9389 jessie_tcp@dns-test-service.dns-9389 jessie_udp@dns-test-service.dns-9389.svc jessie_tcp@dns-test-service.dns-9389.svc jessie_udp@_http._tcp.dns-test-service.dns-9389.svc jessie_tcp@_http._tcp.dns-test-service.dns-9389.svc]
Mar 9 09:09:56.591: INFO: DNS probes using dns-9389/dns-test-037f39ff-2dcb-4c64-a50f-9b67b0b39d76 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:09:56.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9389" for this suite.
• [SLOW TEST:34.592 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":67,"skipped":1275,"failed":0}
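Editorial sketch (not part of the log above): the partial names probed by this test, such as `dns-test-service` and `dns-test-service.dns-9389`, resolve because the kubelet writes a resolv.conf search list into each pod. The `cluster.local` suffix below is the common default cluster domain and is an assumption, not something read from this log.

```python
# Sketch of how a partial service name expands under a pod's resolv.conf
# search list. "cluster.local" is an assumed default cluster domain.

def expand_partial_name(name: str, namespace: str, cluster_domain: str = "cluster.local") -> list:
    """Return the candidate FQDNs a resolver tries, in search-list order."""
    search = [
        f"{namespace}.svc.{cluster_domain}",  # bare <service> names
        f"svc.{cluster_domain}",              # <service>.<namespace> names
        cluster_domain,                       # <service>.<namespace>.svc names
    ]
    # Names with fewer dots than resolv.conf's ndots threshold (5 in
    # kubelet-managed pods) are tried against each search domain first.
    return [f"{name}.{domain}" for domain in search]

print(expand_partial_name("dns-test-service", "dns-9389")[0])
# dns-test-service.dns-9389.svc.cluster.local
```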
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:09:56.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Mar 9 09:10:00.926: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 9 09:10:06.049: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:10:06.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6497" for this suite.
• [SLOW TEST:9.234 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":68,"skipped":1298,"failed":0}
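Editorial sketch of the API body behind the "deleting the pod gracefully" step above: a v1 DeleteOptions carrying a grace period. The 30-second value is illustrative, not taken from the log.

```python
# Sketch of a graceful-delete request body (v1 DeleteOptions).

def graceful_delete_options(grace_period_seconds: int = 30) -> dict:
    # gracePeriodSeconds=0 forces immediate deletion; a positive value gives
    # the kubelet that long to stop containers before the pod is removed.
    return {
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "gracePeriodSeconds": grace_period_seconds,
    }
```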
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:10:06.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0309 09:10:46.207142 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 9 09:10:46.207: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:10:46.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7414" for this suite.
• [SLOW TEST:40.154 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":69,"skipped":1333,"failed":0}
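Editorial sketch: the "if delete options say so" condition in the test above is the `propagationPolicy` field of DeleteOptions. "Orphan" tells the garbage collector to strip the rc's ownerReferences from its pods rather than delete them, which is why the 30-second wait above expects the pods to survive.

```python
# Sketch of the DeleteOptions body used to orphan an rc's pods.

def rc_delete_options(propagation_policy: str = "Orphan") -> dict:
    allowed = ("Orphan", "Background", "Foreground")
    if propagation_policy not in allowed:
        raise ValueError(f"propagationPolicy must be one of {allowed}")
    return {
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "propagationPolicy": propagation_policy,
    }
```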
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:10:46.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 9 09:10:46.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 9 09:10:49.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:10:59.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-54" for this suite.
STEP: Destroying namespace "webhook-54-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.870 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":70,"skipped":1335,"failed":0}
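Editorial sketch of the kind of ValidatingWebhookConfiguration a deny-style webhook registers for pods; `webhook-54` and `e2e-test-webhook` appear in the log, but the webhook name and path are placeholders, and a second analogous rule entry would cover configmaps.

```python
# Sketch of a deny-pods ValidatingWebhookConfiguration (placeholder names).

def deny_pods_webhook(ca_bundle_b64: str) -> dict:
    return {
        "apiVersion": "admissionregistration.k8s.io/v1",
        "kind": "ValidatingWebhookConfiguration",
        "metadata": {"name": "deny-unwanted-pod-creation"},
        "webhooks": [{
            "name": "deny-unwanted-pod-creation.example.com",
            "clientConfig": {
                "service": {"namespace": "webhook-54", "name": "e2e-test-webhook", "path": "/pods"},
                "caBundle": ca_bundle_b64,  # base64 PEM bundle for the serving cert
            },
            "rules": [{
                "operations": ["CREATE"],
                "apiGroups": [""],
                "apiVersions": ["v1"],
                "resources": ["pods"],
            }],
            "failurePolicy": "Fail",  # reject requests if the webhook is unreachable
            "sideEffects": "None",
            "admissionReviewVersions": ["v1"],
        }],
    }
```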
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:11:00.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-d91fd1e0-4b8d-4cff-9f9f-ebe095a5a2f1
STEP: Creating a pod to test consume configMaps
Mar 9 09:11:00.163: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf" in namespace "configmap-3070" to be "success or failure"
Mar 9 09:11:00.167: INFO: Pod "pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227059ms
Mar 9 09:11:02.171: INFO: Pod "pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007928477s
STEP: Saw pod success
Mar 9 09:11:02.171: INFO: Pod "pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf" satisfied condition "success or failure"
Mar 9 09:11:02.174: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf container configmap-volume-test:
STEP: delete the pod
Mar 9 09:11:02.187: INFO: Waiting for pod pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf to disappear
Mar 9 09:11:02.191: INFO: Pod pod-configmaps-6f6e20c2-786d-4957-9342-ad5e58180ebf no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:11:02.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3070" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1335,"failed":0}
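Editorial sketch of the volume stanza the "mappings and Item mode set" test exercises: a configMap volume whose `items` remap a key to a new path and pin a per-item file mode. The key and path names below are placeholders.

```python
# Sketch of a configMap volume with a key-to-path mapping and per-item mode.

def configmap_volume_with_item_mode(configmap_name: str) -> dict:
    return {
        "name": "configmap-volume",
        "configMap": {
            "name": configmap_name,
            "items": [{
                "key": "data-1",           # key inside the ConfigMap
                "path": "path/to/data-2",  # file path under the mount point
                "mode": 0o400,             # per-item mode overrides defaultMode
            }],
        },
    }
```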
SSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:11:02.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 9 09:11:02.283: INFO: Waiting up to 5m0s for pod "pod-a9272fe7-460a-4ab8-9208-37e84df355f4" in namespace "emptydir-6225" to be "success or failure"
Mar 9 09:11:02.308: INFO: Pod "pod-a9272fe7-460a-4ab8-9208-37e84df355f4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.43163ms
Mar 9 09:11:04.312: INFO: Pod "pod-a9272fe7-460a-4ab8-9208-37e84df355f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028573689s
STEP: Saw pod success
Mar 9 09:11:04.312: INFO: Pod "pod-a9272fe7-460a-4ab8-9208-37e84df355f4" satisfied condition "success or failure"
Mar 9 09:11:04.315: INFO: Trying to get logs from node jerma-worker pod pod-a9272fe7-460a-4ab8-9208-37e84df355f4 container test-container:
STEP: delete the pod
Mar 9 09:11:04.374: INFO: Waiting for pod pod-a9272fe7-460a-4ab8-9208-37e84df355f4 to disappear
Mar 9 09:11:04.382: INFO: Pod pod-a9272fe7-460a-4ab8-9208-37e84df355f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:11:04.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6225" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1339,"failed":0}
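Editorial sketch of the "(root,0644,default)" case above: an emptyDir volume on the default medium, into which the test container writes a 0644 file as root. The mount path is a placeholder.

```python
# Sketch of an emptyDir volume and its mount (placeholder paths).

def emptydir_volume(medium: str = "") -> dict:
    # "" selects the node's default storage medium; "Memory" would use tmpfs.
    return {"name": "test-volume", "emptyDir": {"medium": medium}}

mount = {"name": "test-volume", "mountPath": "/test-volume"}
```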
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:11:04.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 9 09:11:05.100: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 9 09:11:07.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719341865, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719341865, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719341865, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719341865, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 9 09:11:10.147: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:11:10.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9858-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:11:11.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9207" for this suite.
STEP: Destroying namespace "webhook-9207-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.031 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":73,"skipped":1358,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:11:11.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-48c7e354-22d3-4291-9df7-bfe964336639
STEP: Creating a pod to test consume configMaps
Mar 9 09:11:11.558: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb" in namespace "projected-8288" to be "success or failure"
Mar 9 09:11:11.564: INFO: Pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.775622ms
Mar 9 09:11:13.593: INFO: Pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035401998s
Mar 9 09:11:15.612: INFO: Pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054369159s
STEP: Saw pod success
Mar 9 09:11:15.612: INFO: Pod "pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb" satisfied condition "success or failure"
Mar 9 09:11:15.615: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb container projected-configmap-volume-test:
STEP: delete the pod
Mar 9 09:11:15.750: INFO: Waiting for pod pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb to disappear
Mar 9 09:11:15.752: INFO: Pod pod-projected-configmaps-50c36888-ece5-4f75-995a-4fde463b27eb no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:11:15.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8288" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1381,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:11:15.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 9 09:11:16.466: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 9 09:11:19.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:11:19.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8448" for this suite.
STEP: Destroying namespace "webhook-8448-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":75,"skipped":1395,"failed":0}
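Editorial sketch of the mechanism behind "create a configmap that should be updated by the webhook": the webhook answers the AdmissionReview request with a base64-encoded JSONPatch. The patch content below is a placeholder, not the test's actual mutation.

```python
# Sketch of a mutating webhook's AdmissionReview response carrying a JSONPatch.
import base64
import json

def mutating_admission_response(uid: str, patch_ops: list) -> dict:
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,  # must echo the uid from the incoming request
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }

resp = mutating_admission_response(
    "example-uid", [{"op": "add", "path": "/data/mutated", "value": "yes"}]
)
```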
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:11:19.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar 9 09:11:19.755: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:11:24.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-822" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":76,"skipped":1401,"failed":0}
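Editorial sketch of the pod shape the InitContainer test submits: init containers run one at a time, each to completion, before the app container starts, and `restartPolicy: Always` then keeps the app container running. Images and commands below are placeholders.

```python
# Sketch of a RestartAlways pod with two ordered init containers.

def restart_always_pod_with_init() -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-init"},
        "spec": {
            "restartPolicy": "Always",
            "initContainers": [
                # Each must exit 0 before the next starts.
                {"name": "init1", "image": "busybox", "command": ["true"]},
                {"name": "init2", "image": "busybox", "command": ["true"]},
            ],
            "containers": [
                {"name": "run1", "image": "busybox", "command": ["sleep", "3600"]},
            ],
        },
    }
```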
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:11:24.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5229
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5229
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5229
Mar 9 09:11:24.348: INFO: Found 0 stateful pods, waiting for 1
Mar 9 09:11:34.352: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Mar 9 09:11:34.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 9 09:11:36.365: INFO: stderr: "I0309 09:11:36.250225 245 log.go:172] (0xc0000f4f20) (0xc0005c9e00) Create stream\nI0309 09:11:36.250257 245 log.go:172] (0xc0000f4f20) (0xc0005c9e00) Stream added, broadcasting: 1\nI0309 09:11:36.252524 245 log.go:172] (0xc0000f4f20) Reply frame received for 1\nI0309 09:11:36.252559 245 log.go:172] (0xc0000f4f20) (0xc00056e640) Create stream\nI0309 09:11:36.252568 245 log.go:172] (0xc0000f4f20) (0xc00056e640) Stream added, broadcasting: 3\nI0309 09:11:36.253379 245 log.go:172] (0xc0000f4f20) Reply frame received for 3\nI0309 09:11:36.253410 245 log.go:172] (0xc0000f4f20) (0xc00051a6e0) Create stream\nI0309 09:11:36.253420 245 log.go:172] (0xc0000f4f20) (0xc00051a6e0) Stream added, broadcasting: 5\nI0309 09:11:36.254328 245 log.go:172] (0xc0000f4f20) Reply frame received for 5\nI0309 09:11:36.329570 245 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0309 09:11:36.329594 245 log.go:172] (0xc00051a6e0) (5) Data frame handling\nI0309 09:11:36.329606 245 log.go:172] (0xc00051a6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:11:36.359439 245 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0309 09:11:36.359463 245 log.go:172] (0xc00056e640) (3) Data frame handling\nI0309 09:11:36.359489 245 log.go:172] (0xc00056e640) (3) Data frame sent\nI0309 09:11:36.359864 245 log.go:172] (0xc0000f4f20) Data frame received for 3\nI0309 09:11:36.359884 245 log.go:172] (0xc00056e640) (3) Data frame handling\nI0309 09:11:36.359915 245 log.go:172] (0xc0000f4f20) Data frame received for 5\nI0309 09:11:36.359941 245 log.go:172] (0xc00051a6e0) (5) Data frame handling\nI0309 09:11:36.361755 245 log.go:172] (0xc0000f4f20) Data frame received for 1\nI0309 09:11:36.361772 245 log.go:172] (0xc0005c9e00) (1) Data frame handling\nI0309 09:11:36.361788 245 log.go:172] (0xc0005c9e00) (1) Data frame sent\nI0309 09:11:36.361799 245 log.go:172] (0xc0000f4f20) (0xc0005c9e00) Stream removed, broadcasting: 1\nI0309 09:11:36.361817 245 log.go:172] (0xc0000f4f20) Go away received\nI0309 09:11:36.362356 245 log.go:172] (0xc0000f4f20) (0xc0005c9e00) Stream removed, broadcasting: 1\nI0309 09:11:36.362374 245 log.go:172] (0xc0000f4f20) (0xc00056e640) Stream removed, broadcasting: 3\nI0309 09:11:36.362382 245 log.go:172] (0xc0000f4f20) (0xc00051a6e0) Stream removed, broadcasting: 5\n"
Mar 9 09:11:36.365: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 9 09:11:36.365: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 9 09:11:36.369: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 9 09:11:46.373: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 9 09:11:46.373: INFO: Waiting for statefulset status.replicas updated to 0
Mar 9 09:11:46.385: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999358s
Mar 9 09:11:47.389: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995465231s
Mar 9 09:11:48.409: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991460979s
Mar 9 09:11:49.413: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971293996s
Mar 9 09:11:50.416: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.9675221s
Mar 9 09:11:51.423: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.964529063s
Mar 9 09:11:52.427: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.956989582s
Mar 9 09:11:53.432: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.953182588s
Mar 9 09:11:54.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.948665447s
Mar 9 09:11:55.439: INFO: Verifying statefulset ss doesn't scale past 1 for another 945.097553ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5229
Mar 9 09:11:56.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 9 09:11:56.651: INFO: stderr: "I0309 09:11:56.589933 271 log.go:172] (0xc000505130) (0xc0008120a0) Create stream\nI0309 09:11:56.589990 271 log.go:172] (0xc000505130) (0xc0008120a0) Stream added, broadcasting: 1\nI0309 09:11:56.592579 271 log.go:172] (0xc000505130) Reply frame received for 1\nI0309 09:11:56.592610 271 log.go:172] (0xc000505130) (0xc00056fae0) Create stream\nI0309 09:11:56.592623 271 log.go:172] (0xc000505130) (0xc00056fae0) Stream added, broadcasting: 3\nI0309 09:11:56.593541 271 log.go:172] (0xc000505130) Reply frame received for 3\nI0309 09:11:56.593562 271 log.go:172] (0xc000505130) (0xc000812140) Create stream\nI0309 09:11:56.593570 271 log.go:172] (0xc000505130) (0xc000812140) Stream added, broadcasting: 5\nI0309 09:11:56.594637 271 log.go:172] (0xc000505130) Reply frame received for 5\nI0309 09:11:56.646084 271 log.go:172] (0xc000505130) Data frame received for 5\nI0309 09:11:56.646154 271 log.go:172] (0xc000812140) (5) Data frame handling\nI0309 09:11:56.646167 271 log.go:172] (0xc000812140) (5) Data frame sent\nI0309 09:11:56.646175 271 log.go:172] (0xc000505130) Data frame received for 5\nI0309 09:11:56.646181 271 log.go:172] (0xc000812140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:11:56.646197 271 log.go:172] (0xc000505130) Data frame received for 3\nI0309 09:11:56.646204 271 log.go:172] (0xc00056fae0) (3) Data frame handling\nI0309 09:11:56.646211 271 log.go:172] (0xc00056fae0) (3) Data frame sent\nI0309 09:11:56.646218 271 log.go:172] (0xc000505130) Data frame received for 3\nI0309 09:11:56.646225 271 log.go:172] (0xc00056fae0) (3) Data frame handling\nI0309 09:11:56.647322 271 log.go:172] (0xc000505130) Data frame received for 1\nI0309 09:11:56.647336 271 log.go:172] (0xc0008120a0) (1) Data frame handling\nI0309 09:11:56.647342 271 log.go:172] (0xc0008120a0) (1) Data frame sent\nI0309 09:11:56.647362 271 log.go:172] (0xc000505130) (0xc0008120a0) Stream removed, broadcasting: 1\nI0309 
09:11:56.647378 271 log.go:172] (0xc000505130) Go away received\nI0309 09:11:56.647701 271 log.go:172] (0xc000505130) (0xc0008120a0) Stream removed, broadcasting: 1\nI0309 09:11:56.647718 271 log.go:172] (0xc000505130) (0xc00056fae0) Stream removed, broadcasting: 3\nI0309 09:11:56.647726 271 log.go:172] (0xc000505130) (0xc000812140) Stream removed, broadcasting: 5\n"
Mar 9 09:11:56.651: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 9 09:11:56.651: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 9 09:11:56.655: INFO: Found 1 stateful pods, waiting for 3
Mar 9 09:12:06.659: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 9 09:12:06.659: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 9 09:12:06.659: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Mar 9 09:12:06.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 9 09:12:06.882: INFO: stderr: "I0309 09:12:06.807084 292 log.go:172] (0xc00052a840) (0xc00068bcc0) Create stream\nI0309 09:12:06.807124 292 log.go:172] (0xc00052a840) (0xc00068bcc0) Stream added, broadcasting: 1\nI0309 09:12:06.812477 292 log.go:172] (0xc00052a840) Reply frame received for 1\nI0309 09:12:06.812516 292 log.go:172] (0xc00052a840) (0xc00041f400) Create stream\nI0309 09:12:06.812528 292 log.go:172] (0xc00052a840) (0xc00041f400) Stream added, broadcasting: 3\nI0309 09:12:06.814902 292 log.go:172] (0xc00052a840) Reply frame received for 3\nI0309 09:12:06.814925 292 log.go:172] (0xc00052a840) (0xc000510000) Create stream\nI0309 09:12:06.814932 292 log.go:172] (0xc00052a840) (0xc000510000) Stream added, broadcasting: 5\nI0309 09:12:06.815755 292 log.go:172] (0xc00052a840) Reply frame received for 5\nI0309 09:12:06.877113 292 log.go:172] (0xc00052a840) Data frame received for 5\nI0309 09:12:06.877138 292 log.go:172] (0xc000510000) (5) Data frame handling\nI0309 09:12:06.877148 292 log.go:172] (0xc000510000) (5) Data frame sent\nI0309 09:12:06.877159 292 log.go:172] (0xc00052a840) Data frame received for 5\nI0309 09:12:06.877165 292 log.go:172] (0xc000510000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:12:06.877185 292 log.go:172] (0xc00052a840) Data frame received for 3\nI0309 09:12:06.877191 292 log.go:172] (0xc00041f400) (3) Data frame handling\nI0309 09:12:06.877197 292 log.go:172] (0xc00041f400) (3) Data frame sent\nI0309 09:12:06.877203 292 log.go:172] (0xc00052a840) Data frame received for 3\nI0309 09:12:06.877208 292 log.go:172] (0xc00041f400) (3) Data frame handling\nI0309 09:12:06.878312 292 log.go:172] (0xc00052a840) Data frame received for 1\nI0309 09:12:06.878340 292 log.go:172] (0xc00068bcc0) (1) Data frame handling\nI0309 09:12:06.878356 292 log.go:172] (0xc00068bcc0) (1) Data frame sent\nI0309 09:12:06.878498 292 log.go:172] (0xc00052a840) (0xc00068bcc0) Stream removed, broadcasting: 1\nI0309 
09:12:06.878519 292 log.go:172] (0xc00052a840) Go away received\nI0309 09:12:06.878995 292 log.go:172] (0xc00052a840) (0xc00068bcc0) Stream removed, broadcasting: 1\nI0309 09:12:06.879017 292 log.go:172] (0xc00052a840) (0xc00041f400) Stream removed, broadcasting: 3\nI0309 09:12:06.879025 292 log.go:172] (0xc00052a840) (0xc000510000) Stream removed, broadcasting: 5\n"
Mar 9 09:12:06.882: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 9 09:12:06.882: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 9 09:12:06.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 9 09:12:07.097: INFO: stderr: "I0309 09:12:06.999192 312 log.go:172] (0xc00063e000) (0xc0007248c0) Create stream\nI0309 09:12:06.999235 312 log.go:172] (0xc00063e000) (0xc0007248c0) Stream added, broadcasting: 1\nI0309 09:12:07.001380 312 log.go:172] (0xc00063e000) Reply frame received for 1\nI0309 09:12:07.001423 312 log.go:172] (0xc00063e000) (0xc0001e2000) Create stream\nI0309 09:12:07.001435 312 log.go:172] (0xc00063e000) (0xc0001e2000) Stream added, broadcasting: 3\nI0309 09:12:07.002057 312 log.go:172] (0xc00063e000) Reply frame received for 3\nI0309 09:12:07.002083 312 log.go:172] (0xc00063e000) (0xc000724960) Create stream\nI0309 09:12:07.002090 312 log.go:172] (0xc00063e000) (0xc000724960) Stream added, broadcasting: 5\nI0309 09:12:07.002773 312 log.go:172] (0xc00063e000) Reply frame received for 5\nI0309 09:12:07.064369 312 log.go:172] (0xc00063e000) Data frame received for 5\nI0309 09:12:07.064393 312 log.go:172] (0xc000724960) (5) Data frame handling\nI0309 09:12:07.064407 312 log.go:172] (0xc000724960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:12:07.092942 312 log.go:172] (0xc00063e000) Data frame received for 5\nI0309 09:12:07.092970 312 log.go:172] (0xc000724960) (5) Data frame handling\nI0309 09:12:07.093238 312 log.go:172] (0xc00063e000) Data frame received for 3\nI0309 09:12:07.093258 312 log.go:172] (0xc0001e2000) (3) Data frame handling\nI0309 09:12:07.093273 312 log.go:172] (0xc0001e2000) (3) Data frame sent\nI0309 09:12:07.093293 312 log.go:172] (0xc00063e000) Data frame received for 3\nI0309 09:12:07.093299 312 log.go:172] (0xc0001e2000) (3) Data frame handling\nI0309 09:12:07.094613 312 log.go:172] (0xc00063e000) Data frame received for 1\nI0309 09:12:07.094627 312 log.go:172] (0xc0007248c0) (1) Data frame handling\nI0309 09:12:07.094639 312 log.go:172] (0xc0007248c0) (1) Data frame sent\nI0309 09:12:07.094647 312 log.go:172] (0xc00063e000) (0xc0007248c0) Stream removed, broadcasting: 1\nI0309 
09:12:07.094703 312 log.go:172] (0xc00063e000) Go away received\nI0309 09:12:07.094868 312 log.go:172] (0xc00063e000) (0xc0007248c0) Stream removed, broadcasting: 1\nI0309 09:12:07.094880 312 log.go:172] (0xc00063e000) (0xc0001e2000) Stream removed, broadcasting: 3\nI0309 09:12:07.094886 312 log.go:172] (0xc00063e000) (0xc000724960) Stream removed, broadcasting: 5\n"
Mar 9 09:12:07.097: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 9 09:12:07.097: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 9 09:12:07.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 9 09:12:07.306: INFO: stderr: "I0309 09:12:07.212789 332 log.go:172] (0xc0009b7760) (0xc000974820) Create stream\nI0309 09:12:07.212862 332 log.go:172] (0xc0009b7760) (0xc000974820) Stream added, broadcasting: 1\nI0309 09:12:07.216572 332 log.go:172] (0xc0009b7760) Reply frame received for 1\nI0309 09:12:07.216606 332 log.go:172] (0xc0009b7760) (0xc000606780) Create stream\nI0309 09:12:07.216614 332 log.go:172] (0xc0009b7760) (0xc000606780) Stream added, broadcasting: 3\nI0309 09:12:07.217098 332 log.go:172] (0xc0009b7760) Reply frame received for 3\nI0309 09:12:07.217125 332 log.go:172] (0xc0009b7760) (0xc000729540) Create stream\nI0309 09:12:07.217136 332 log.go:172] (0xc0009b7760) (0xc000729540) Stream added, broadcasting: 5\nI0309 09:12:07.217689 332 log.go:172] (0xc0009b7760) Reply frame received for 5\nI0309 09:12:07.276770 332 log.go:172] (0xc0009b7760) Data frame received for 5\nI0309 09:12:07.276787 332 log.go:172] (0xc000729540) (5) Data frame handling\nI0309 09:12:07.276796 332 log.go:172] (0xc000729540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0309 09:12:07.302408 332 log.go:172] (0xc0009b7760) Data frame received for 5\nI0309 09:12:07.302431 332 log.go:172] (0xc000729540) (5) Data frame handling\nI0309 09:12:07.302454 332 log.go:172] (0xc0009b7760) Data frame received for 3\nI0309 09:12:07.302477 332 log.go:172] (0xc000606780) (3) Data frame handling\nI0309 09:12:07.302494 332 log.go:172] (0xc000606780) (3) Data frame sent\nI0309 09:12:07.302502 332 log.go:172] (0xc0009b7760) Data frame received for 3\nI0309 09:12:07.302508 332 log.go:172] (0xc000606780) (3) Data frame handling\nI0309 09:12:07.303359 332 log.go:172] (0xc0009b7760) Data frame received for 1\nI0309 09:12:07.303371 332 log.go:172] (0xc000974820) (1) Data frame handling\nI0309 09:12:07.303385 332 log.go:172] (0xc000974820) (1) Data frame sent\nI0309 09:12:07.303395 332 log.go:172] (0xc0009b7760) (0xc000974820) Stream removed, broadcasting: 1\nI0309 
09:12:07.303407 332 log.go:172] (0xc0009b7760) Go away received\nI0309 09:12:07.303721 332 log.go:172] (0xc0009b7760) (0xc000974820) Stream removed, broadcasting: 1\nI0309 09:12:07.303738 332 log.go:172] (0xc0009b7760) (0xc000606780) Stream removed, broadcasting: 3\nI0309 09:12:07.303747 332 log.go:172] (0xc0009b7760) (0xc000729540) Stream removed, broadcasting: 5\n"
Mar 9 09:12:07.306: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 9 09:12:07.306: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
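The mv commands above are how the test toggles readiness: moving index.html out of httpd's docroot makes each pod's HTTP readiness probe fail, which is what forces the scale-down to halt. A local simulation of that file shuffle (temp directories stand in for the pod's /usr/local/apache2/htdocs and /tmp):

```shell
# Stand-ins for the pod's docroot and stash directory
htdocs=$(mktemp -d)   # stands in for /usr/local/apache2/htdocs
stash=$(mktemp -d)    # stands in for /tmp inside the pod
echo 'It works!' > "$htdocs/index.html"

mv -v "$htdocs/index.html" "$stash/"   # readiness probe would now fail (no index.html)
mv -v "$stash/index.html" "$htdocs/"   # readiness restored
```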
Mar 9 09:12:07.306: INFO: Waiting for statefulset status.replicas updated to 0
Mar 9 09:12:07.339: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Mar 9 09:12:17.346: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 9 09:12:17.346: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 9 09:12:17.346: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 9 09:12:17.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999929s
Mar 9 09:12:18.373: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984059427s
Mar 9 09:12:19.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979418705s
Mar 9 09:12:20.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975385453s
Mar 9 09:12:21.386: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.971223056s
Mar 9 09:12:22.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.96692134s
Mar 9 09:12:23.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962880814s
Mar 9 09:12:24.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958792675s
Mar 9 09:12:25.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954603446s
Mar 9 09:12:26.405: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.34603ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5229
Mar 9 09:12:27.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 9 09:12:27.609: INFO: stderr: "I0309 09:12:27.541532 352 log.go:172] (0xc0009d60b0) (0xc0009ac000) Create stream\nI0309 09:12:27.541578 352 log.go:172] (0xc0009d60b0) (0xc0009ac000) Stream added, broadcasting: 1\nI0309 09:12:27.545471 352 log.go:172] (0xc0009d60b0) Reply frame received for 1\nI0309 09:12:27.545530 352 log.go:172] (0xc0009d60b0) (0xc0009e6000) Create stream\nI0309 09:12:27.545563 352 log.go:172] (0xc0009d60b0) (0xc0009e6000) Stream added, broadcasting: 3\nI0309 09:12:27.548656 352 log.go:172] (0xc0009d60b0) Reply frame received for 3\nI0309 09:12:27.548699 352 log.go:172] (0xc0009d60b0) (0xc000938000) Create stream\nI0309 09:12:27.548721 352 log.go:172] (0xc0009d60b0) (0xc000938000) Stream added, broadcasting: 5\nI0309 09:12:27.549752 352 log.go:172] (0xc0009d60b0) Reply frame received for 5\nI0309 09:12:27.605188 352 log.go:172] (0xc0009d60b0) Data frame received for 3\nI0309 09:12:27.605226 352 log.go:172] (0xc0009d60b0) Data frame received for 5\nI0309 09:12:27.605247 352 log.go:172] (0xc000938000) (5) Data frame handling\nI0309 09:12:27.605257 352 log.go:172] (0xc000938000) (5) Data frame sent\nI0309 09:12:27.605263 352 log.go:172] (0xc0009d60b0) Data frame received for 5\nI0309 09:12:27.605268 352 log.go:172] (0xc000938000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:12:27.605283 352 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0309 09:12:27.605290 352 log.go:172] (0xc0009e6000) (3) Data frame sent\nI0309 09:12:27.605295 352 log.go:172] (0xc0009d60b0) Data frame received for 3\nI0309 09:12:27.605303 352 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0309 09:12:27.606222 352 log.go:172] (0xc0009d60b0) Data frame received for 1\nI0309 09:12:27.606248 352 log.go:172] (0xc0009ac000) (1) Data frame handling\nI0309 09:12:27.606262 352 log.go:172] (0xc0009ac000) (1) Data frame sent\nI0309 09:12:27.606331 352 log.go:172] (0xc0009d60b0) (0xc0009ac000) Stream removed, broadcasting: 1\nI0309 
09:12:27.606383 352 log.go:172] (0xc0009d60b0) Go away received\nI0309 09:12:27.606745 352 log.go:172] (0xc0009d60b0) (0xc0009ac000) Stream removed, broadcasting: 1\nI0309 09:12:27.606760 352 log.go:172] (0xc0009d60b0) (0xc0009e6000) Stream removed, broadcasting: 3\nI0309 09:12:27.606773 352 log.go:172] (0xc0009d60b0) (0xc000938000) Stream removed, broadcasting: 5\n"
Mar 9 09:12:27.609: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 9 09:12:27.609: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 9 09:12:27.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 9 09:12:27.768: INFO: stderr: "I0309 09:12:27.708704 372 log.go:172] (0xc00091cb00) (0xc0006cdd60) Create stream\nI0309 09:12:27.708760 372 log.go:172] (0xc00091cb00) (0xc0006cdd60) Stream added, broadcasting: 1\nI0309 09:12:27.710861 372 log.go:172] (0xc00091cb00) Reply frame received for 1\nI0309 09:12:27.710894 372 log.go:172] (0xc00091cb00) (0xc0006cde00) Create stream\nI0309 09:12:27.710909 372 log.go:172] (0xc00091cb00) (0xc0006cde00) Stream added, broadcasting: 3\nI0309 09:12:27.711828 372 log.go:172] (0xc00091cb00) Reply frame received for 3\nI0309 09:12:27.711851 372 log.go:172] (0xc00091cb00) (0xc0006cdea0) Create stream\nI0309 09:12:27.711858 372 log.go:172] (0xc00091cb00) (0xc0006cdea0) Stream added, broadcasting: 5\nI0309 09:12:27.712650 372 log.go:172] (0xc00091cb00) Reply frame received for 5\nI0309 09:12:27.764702 372 log.go:172] (0xc00091cb00) Data frame received for 5\nI0309 09:12:27.764736 372 log.go:172] (0xc0006cdea0) (5) Data frame handling\nI0309 09:12:27.764749 372 log.go:172] (0xc0006cdea0) (5) Data frame sent\nI0309 09:12:27.764760 372 log.go:172] (0xc00091cb00) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:12:27.764774 372 log.go:172] (0xc00091cb00) Data frame received for 3\nI0309 09:12:27.764791 372 log.go:172] (0xc0006cde00) (3) Data frame handling\nI0309 09:12:27.764804 372 log.go:172] (0xc0006cde00) (3) Data frame sent\nI0309 09:12:27.764812 372 log.go:172] (0xc00091cb00) Data frame received for 3\nI0309 09:12:27.764827 372 log.go:172] (0xc0006cde00) (3) Data frame handling\nI0309 09:12:27.764859 372 log.go:172] (0xc0006cdea0) (5) Data frame handling\nI0309 09:12:27.765620 372 log.go:172] (0xc00091cb00) Data frame received for 1\nI0309 09:12:27.765659 372 log.go:172] (0xc0006cdd60) (1) Data frame handling\nI0309 09:12:27.765672 372 log.go:172] (0xc0006cdd60) (1) Data frame sent\nI0309 09:12:27.765691 372 log.go:172] (0xc00091cb00) (0xc0006cdd60) Stream removed, broadcasting: 1\nI0309 
09:12:27.765707 372 log.go:172] (0xc00091cb00) Go away received\nI0309 09:12:27.765962 372 log.go:172] (0xc00091cb00) (0xc0006cdd60) Stream removed, broadcasting: 1\nI0309 09:12:27.765978 372 log.go:172] (0xc00091cb00) (0xc0006cde00) Stream removed, broadcasting: 3\nI0309 09:12:27.765985 372 log.go:172] (0xc00091cb00) (0xc0006cdea0) Stream removed, broadcasting: 5\n"
Mar 9 09:12:27.768: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 9 09:12:27.768: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 9 09:12:27.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5229 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 9 09:12:27.929: INFO: stderr: "I0309 09:12:27.864857 392 log.go:172] (0xc000aea630) (0xc0008dc000) Create stream\nI0309 09:12:27.864898 392 log.go:172] (0xc000aea630) (0xc0008dc000) Stream added, broadcasting: 1\nI0309 09:12:27.866822 392 log.go:172] (0xc000aea630) Reply frame received for 1\nI0309 09:12:27.866845 392 log.go:172] (0xc000aea630) (0xc0006f5a40) Create stream\nI0309 09:12:27.866851 392 log.go:172] (0xc000aea630) (0xc0006f5a40) Stream added, broadcasting: 3\nI0309 09:12:27.867642 392 log.go:172] (0xc000aea630) Reply frame received for 3\nI0309 09:12:27.867676 392 log.go:172] (0xc000aea630) (0xc0008dc0a0) Create stream\nI0309 09:12:27.867688 392 log.go:172] (0xc000aea630) (0xc0008dc0a0) Stream added, broadcasting: 5\nI0309 09:12:27.868466 392 log.go:172] (0xc000aea630) Reply frame received for 5\nI0309 09:12:27.924684 392 log.go:172] (0xc000aea630) Data frame received for 5\nI0309 09:12:27.924718 392 log.go:172] (0xc0008dc0a0) (5) Data frame handling\nI0309 09:12:27.924728 392 log.go:172] (0xc0008dc0a0) (5) Data frame sent\nI0309 09:12:27.924735 392 log.go:172] (0xc000aea630) Data frame received for 5\nI0309 09:12:27.924741 392 log.go:172] (0xc0008dc0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0309 09:12:27.924758 392 log.go:172] (0xc000aea630) Data frame received for 3\nI0309 09:12:27.924767 392 log.go:172] (0xc0006f5a40) (3) Data frame handling\nI0309 09:12:27.924777 392 log.go:172] (0xc0006f5a40) (3) Data frame sent\nI0309 09:12:27.924784 392 log.go:172] (0xc000aea630) Data frame received for 3\nI0309 09:12:27.924789 392 log.go:172] (0xc0006f5a40) (3) Data frame handling\nI0309 09:12:27.925956 392 log.go:172] (0xc000aea630) Data frame received for 1\nI0309 09:12:27.925971 392 log.go:172] (0xc0008dc000) (1) Data frame handling\nI0309 09:12:27.925979 392 log.go:172] (0xc0008dc000) (1) Data frame sent\nI0309 09:12:27.925992 392 log.go:172] (0xc000aea630) (0xc0008dc000) Stream removed, broadcasting: 1\nI0309 
09:12:27.926010 392 log.go:172] (0xc000aea630) Go away received\nI0309 09:12:27.926312 392 log.go:172] (0xc000aea630) (0xc0008dc000) Stream removed, broadcasting: 1\nI0309 09:12:27.926333 392 log.go:172] (0xc000aea630) (0xc0006f5a40) Stream removed, broadcasting: 3\nI0309 09:12:27.926339 392 log.go:172] (0xc000aea630) (0xc0008dc0a0) Stream removed, broadcasting: 5\n"
Mar 9 09:12:27.929: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 9 09:12:27.929: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 9 09:12:27.929: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
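The reverse-order guarantee being verified here is that scale-down removes the highest ordinal first (ss-2, then ss-1, then ss-0). A hypothetical check of that property against the deletion sequence observed in this log:

```shell
# Assert that each deleted pod has a strictly lower ordinal than the previous one.
deleted="ss-2 ss-1 ss-0"   # observed deletion sequence from the log
prev=3
reverse_ok=1
for pod in $deleted; do
  ord=${pod##*-}                      # extract the ordinal suffix
  [ "$ord" -lt "$prev" ] || reverse_ok=0
  prev=$ord
done
echo "scaled down in reverse order: $reverse_ok"
```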
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 9 09:12:37.950: INFO: Deleting all statefulset in ns statefulset-5229
Mar 9 09:12:37.953: INFO: Scaling statefulset ss to 0
Mar 9 09:12:37.962: INFO: Waiting for statefulset status.replicas updated to 0
Mar 9 09:12:37.965: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:12:38.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5229" for this suite.
• [SLOW TEST:73.763 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":77,"skipped":1405,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:12:38.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 9 09:12:38.106: INFO: Waiting up to 5m0s for pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261" in namespace "emptydir-1081" to be "success or failure"
Mar 9 09:12:38.134: INFO: Pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261": Phase="Pending", Reason="", readiness=false. Elapsed: 28.200787ms
Mar 9 09:12:40.138: INFO: Pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032119571s
Mar 9 09:12:42.142: INFO: Pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035649932s
STEP: Saw pod success
Mar 9 09:12:42.142: INFO: Pod "pod-1537a0bf-dff2-4a7c-a630-26829f571261" satisfied condition "success or failure"
Mar 9 09:12:42.144: INFO: Trying to get logs from node jerma-worker2 pod pod-1537a0bf-dff2-4a7c-a630-26829f571261 container test-container:
STEP: delete the pod
Mar 9 09:12:42.177: INFO: Waiting for pod pod-1537a0bf-dff2-4a7c-a630-26829f571261 to disappear
Mar 9 09:12:42.183: INFO: Pod pod-1537a0bf-dff2-4a7c-a630-26829f571261 no longer exists
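The (non-root,0644,default) case above has the test container write a file into the emptydir mount with the requested mode and verify it. A local sketch of the same mode check against a temp file (illustrative only; the real test runs inside the pod's mount):

```shell
# Create a file with the tested mode and read the mode back.
f=$(mktemp)
chmod 0644 "$f"
# GNU stat first, BSD stat as a fallback
mode=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
echo "mode=$mode"
rm -f "$f"
```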
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:12:42.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1081" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1410,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:12:42.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1876.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1876.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.214.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.214.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.214.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.214.188_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1876.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1876.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.214.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.214.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.214.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.214.188_tcp@PTR;sleep 1; done
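The one-line probe scripts above write one OK file under /results per (image, protocol, record) combination, plus a pod A record per image/protocol. A readable sketch enumerating the expected result names (namespace dns-1876 taken from the log; PTR entries omitted for brevity):

```shell
# Enumerate the per-record result file names the wheezy/jessie probers produce.
ns=dns-1876
names=$(
  for img in wheezy jessie; do
    for proto in udp tcp; do
      for rec in "dns-test-service.$ns.svc.cluster.local" \
                 "_http._tcp.dns-test-service.$ns.svc.cluster.local" \
                 "_http._tcp.test-service-2.$ns.svc.cluster.local"; do
        echo "${img}_${proto}@${rec}"
      done
      echo "${img}_${proto}@PodARecord"
    done
  done
)
echo "$names"
```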
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 9 09:12:46.320: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:46.322: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:46.336: INFO: Unable to read jessie_udp@dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:46.340: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:46.342: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:46.358: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local]
Mar 9 09:12:51.520: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:51.538: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:51.646: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:51.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:51.662: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local]
Mar 9 09:12:56.368: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:56.371: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:56.395: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:56.397: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:12:56.416: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local]
Mar 9 09:13:01.369: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:01.398: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:01.423: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:01.426: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:01.444: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local]
Mar 9 09:13:06.368: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:06.370: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:06.393: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:06.395: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:06.412: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local]
Mar 9 09:13:11.368: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:11.371: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:11.392: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:11.395: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local from pod dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c: the server could not find the requested resource (get pods dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c)
Mar 9 09:13:11.409: INFO: Lookups using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1876.svc.cluster.local]
Mar 9 09:13:16.393: INFO: DNS probes using dns-1876/dns-test-d0cbdc3f-b602-4562-a44d-e77e8b2a607c succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:13:16.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1876" for this suite.
• [SLOW TEST:34.397 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":79,"skipped":1440,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:13:16.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 9 09:13:17.236: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 9 09:13:20.301: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:13:20.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2815" for this suite.
STEP: Destroying namespace "webhook-2815-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":80,"skipped":1445,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:13:20.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Mar 9 09:13:20.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:13:36.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8040" for this suite.
• [SLOW TEST:16.436 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":81,"skipped":1447,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:13:36.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 9 09:13:39.499: INFO: Successfully updated pod "annotationupdatef47691f8-4d9f-4ec3-a953-3ece2181d053"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:13:43.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3401" for this suite.
• [SLOW TEST:6.672 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1449,"failed":0}
[k8s.io] Docker Containers
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:13:43.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Mar 9 09:13:43.649: INFO: Waiting up to 5m0s for pod "client-containers-739d7648-ab21-4d63-91e8-b628c103eb09" in namespace "containers-16" to be "success or failure"
Mar 9 09:13:43.664: INFO: Pod "client-containers-739d7648-ab21-4d63-91e8-b628c103eb09": Phase="Pending", Reason="", readiness=false. Elapsed: 15.514395ms
Mar 9 09:13:45.668: INFO: Pod "client-containers-739d7648-ab21-4d63-91e8-b628c103eb09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019237396s
STEP: Saw pod success
Mar 9 09:13:45.668: INFO: Pod "client-containers-739d7648-ab21-4d63-91e8-b628c103eb09" satisfied condition "success or failure"
Mar 9 09:13:45.670: INFO: Trying to get logs from node jerma-worker2 pod client-containers-739d7648-ab21-4d63-91e8-b628c103eb09 container test-container:
STEP: delete the pod
Mar 9 09:13:45.690: INFO: Waiting for pod client-containers-739d7648-ab21-4d63-91e8-b628c103eb09 to disappear
Mar 9 09:13:45.706: INFO: Pod client-containers-739d7648-ab21-4d63-91e8-b628c103eb09 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:13:45.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-16" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1449,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:13:45.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:13:56.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7722" for this suite.
• [SLOW TEST:11.179 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":84,"skipped":1464,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:13:56.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 9 09:13:58.981: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:13:59.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5036" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1478,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:13:59.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9029
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 9 09:13:59.085: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 9 09:14:23.200: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.8 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9029 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 09:14:23.200: INFO: >>> kubeConfig: /root/.kube/config
I0309 09:14:23.238743 6 log.go:172] (0xc002c80210) (0xc001486aa0) Create stream
I0309 09:14:23.238776 6 log.go:172] (0xc002c80210) (0xc001486aa0) Stream added, broadcasting: 1
I0309 09:14:23.246204 6 log.go:172] (0xc002c80210) Reply frame received for 1
I0309 09:14:23.246252 6 log.go:172] (0xc002c80210) (0xc000d31860) Create stream
I0309 09:14:23.246281 6 log.go:172] (0xc002c80210) (0xc000d31860) Stream added, broadcasting: 3
I0309 09:14:23.247639 6 log.go:172] (0xc002c80210) Reply frame received for 3
I0309 09:14:23.247691 6 log.go:172] (0xc002c80210) (0xc0023cc3c0) Create stream
I0309 09:14:23.247706 6 log.go:172] (0xc002c80210) (0xc0023cc3c0) Stream added, broadcasting: 5
I0309 09:14:23.248855 6 log.go:172] (0xc002c80210) Reply frame received for 5
I0309 09:14:24.318289 6 log.go:172] (0xc002c80210) Data frame received for 5
I0309 09:14:24.318332 6 log.go:172] (0xc0023cc3c0) (5) Data frame handling
I0309 09:14:24.318353 6 log.go:172] (0xc002c80210) Data frame received for 3
I0309 09:14:24.318367 6 log.go:172] (0xc000d31860) (3) Data frame handling
I0309 09:14:24.318382 6 log.go:172] (0xc000d31860) (3) Data frame sent
I0309 09:14:24.318412 6 log.go:172] (0xc002c80210) Data frame received for 3
I0309 09:14:24.318428 6 log.go:172] (0xc000d31860) (3) Data frame handling
I0309 09:14:24.320076 6 log.go:172] (0xc002c80210) Data frame received for 1
I0309 09:14:24.320098 6 log.go:172] (0xc001486aa0) (1) Data frame handling
I0309 09:14:24.320124 6 log.go:172] (0xc001486aa0) (1) Data frame sent
I0309 09:14:24.320141 6 log.go:172] (0xc002c80210) (0xc001486aa0) Stream removed, broadcasting: 1
I0309 09:14:24.320171 6 log.go:172] (0xc002c80210) Go away received
I0309 09:14:24.320234 6 log.go:172] (0xc002c80210) (0xc001486aa0) Stream removed, broadcasting: 1
I0309 09:14:24.320250 6 log.go:172] (0xc002c80210) (0xc000d31860) Stream removed, broadcasting: 3
I0309 09:14:24.320263 6 log.go:172] (0xc002c80210) (0xc0023cc3c0) Stream removed, broadcasting: 5
Mar 9 09:14:24.320: INFO: Found all expected endpoints: [netserver-0]
Mar 9 09:14:24.323: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.13 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9029 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 09:14:24.323: INFO: >>> kubeConfig: /root/.kube/config
I0309 09:14:24.356479 6 log.go:172] (0xc002c808f0) (0xc001487220) Create stream
I0309 09:14:24.356509 6 log.go:172] (0xc002c808f0) (0xc001487220) Stream added, broadcasting: 1
I0309 09:14:24.358266 6 log.go:172] (0xc002c808f0) Reply frame received for 1
I0309 09:14:24.358309 6 log.go:172] (0xc002c808f0) (0xc0028620a0) Create stream
I0309 09:14:24.358324 6 log.go:172] (0xc002c808f0) (0xc0028620a0) Stream added, broadcasting: 3
I0309 09:14:24.359260 6 log.go:172] (0xc002c808f0) Reply frame received for 3
I0309 09:14:24.359288 6 log.go:172] (0xc002c808f0) (0xc0014872c0) Create stream
I0309 09:14:24.359302 6 log.go:172] (0xc002c808f0) (0xc0014872c0) Stream added, broadcasting: 5
I0309 09:14:24.360191 6 log.go:172] (0xc002c808f0) Reply frame received for 5
I0309 09:14:25.409956 6 log.go:172] (0xc002c808f0) Data frame received for 3
I0309 09:14:25.409996 6 log.go:172] (0xc0028620a0) (3) Data frame handling
I0309 09:14:25.410011 6 log.go:172] (0xc0028620a0) (3) Data frame sent
I0309 09:14:25.410027 6 log.go:172] (0xc002c808f0) Data frame received for 3
I0309 09:14:25.410037 6 log.go:172] (0xc0028620a0) (3) Data frame handling
I0309 09:14:25.410086 6 log.go:172] (0xc002c808f0) Data frame received for 5
I0309 09:14:25.410110 6 log.go:172] (0xc0014872c0) (5) Data frame handling
I0309 09:14:25.411784 6 log.go:172] (0xc002c808f0) Data frame received for 1
I0309 09:14:25.411810 6 log.go:172] (0xc001487220) (1) Data frame handling
I0309 09:14:25.411842 6 log.go:172] (0xc001487220) (1) Data frame sent
I0309 09:14:25.411862 6 log.go:172] (0xc002c808f0) (0xc001487220) Stream removed, broadcasting: 1
I0309 09:14:25.411884 6 log.go:172] (0xc002c808f0) Go away received
I0309 09:14:25.411993 6 log.go:172] (0xc002c808f0) (0xc001487220) Stream removed, broadcasting: 1
I0309 09:14:25.412015 6 log.go:172] (0xc002c808f0) (0xc0028620a0) Stream removed, broadcasting: 3
I0309 09:14:25.412025 6 log.go:172] (0xc002c808f0) (0xc0014872c0) Stream removed, broadcasting: 5
Mar 9 09:14:25.412: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:14:25.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9029" for this suite.
• [SLOW TEST:26.400 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1483,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:14:25.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 9 09:14:25.488: INFO: Waiting up to 5m0s for pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77" in namespace "emptydir-1567" to be "success or failure"
Mar 9 09:14:25.504: INFO: Pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77": Phase="Pending", Reason="", readiness=false. Elapsed: 15.459984ms
Mar 9 09:14:27.508: INFO: Pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019631189s
Mar 9 09:14:29.512: INFO: Pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02392748s
STEP: Saw pod success
Mar 9 09:14:29.512: INFO: Pod "pod-5bce757a-dfca-4cf7-a456-eebe6b620b77" satisfied condition "success or failure"
Mar 9 09:14:29.515: INFO: Trying to get logs from node jerma-worker2 pod pod-5bce757a-dfca-4cf7-a456-eebe6b620b77 container test-container:
STEP: delete the pod
Mar 9 09:14:29.541: INFO: Waiting for pod pod-5bce757a-dfca-4cf7-a456-eebe6b620b77 to disappear
Mar 9 09:14:29.545: INFO: Pod pod-5bce757a-dfca-4cf7-a456-eebe6b620b77 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:14:29.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1567" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1488,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:14:29.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:14:45.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6150" for this suite.
• [SLOW TEST:16.289 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":88,"skipped":1489,"failed":0}
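The scope behavior this test verifies can be sketched as follows. The rule itself is how Kubernetes defines the `Terminating`/`NotTerminating` quota scopes (a pod counts as terminating when `spec.activeDeadlineSeconds` is set); the pod specs and deadline value below are illustrative.

```python
# Sketch of the quota-scope matching rule exercised by the test above:
# a pod is charged against a quota with scope "Terminating" when
# spec.activeDeadlineSeconds is set, and against "NotTerminating" otherwise.

def matches_scope(pod_spec: dict, scope: str) -> bool:
    """Return True if the pod is charged against a quota with this scope."""
    terminating = pod_spec.get("activeDeadlineSeconds") is not None
    if scope == "Terminating":
        return terminating
    if scope == "NotTerminating":
        return not terminating
    raise ValueError(f"unhandled scope: {scope}")

long_running = {"containers": [{"name": "pause"}]}        # no deadline set
terminating = {"containers": [{"name": "pause"}],
               "activeDeadlineSeconds": 5}                # bounded lifetime

print(matches_scope(long_running, "NotTerminating"))  # True
print(matches_scope(terminating, "Terminating"))      # True
print(matches_scope(long_running, "Terminating"))     # False
```

This mirrors the log's symmetry: each pod is captured by exactly one of the two quotas and ignored by the other.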
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:14:45.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-578f112a-828c-4c34-92f3-190e4c3f5eee
STEP: Creating a pod to test consume secrets
Mar 9 09:14:45.914: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e" in namespace "projected-4360" to be "success or failure"
Mar 9 09:14:45.962: INFO: Pod "pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e": Phase="Pending", Reason="", readiness=false. Elapsed: 47.506422ms
Mar 9 09:14:47.966: INFO: Pod "pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.051916602s
STEP: Saw pod success
Mar 9 09:14:47.966: INFO: Pod "pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e" satisfied condition "success or failure"
Mar 9 09:14:47.969: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e container projected-secret-volume-test:
STEP: delete the pod
Mar 9 09:14:47.991: INFO: Waiting for pod pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e to disappear
Mar 9 09:14:48.007: INFO: Pod pod-projected-secrets-4720af30-ea50-4789-bfe1-174f1c8c3d6e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:14:48.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4360" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1504,"failed":0}
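The "with mappings" part of the test above refers to remapping a secret key to a custom file path inside the projected volume. A minimal sketch of that volume shape, with hypothetical key and path names standing in for the generated ones:

```python
# Illustrative projected-volume source: the secret key "data-1" is remapped
# to the file "new-path-data-1" inside the mounted volume. Names here are
# hypothetical stand-ins for the generated objects in the log above.
volume = {
    "name": "projected-secret-volume",
    "projected": {
        "sources": [{
            "secret": {
                "name": "projected-secret-test-map",
                "items": [{"key": "data-1", "path": "new-path-data-1"}],
            },
        }],
    },
}

def mounted_paths(vol: dict) -> list:
    """File paths the kubelet would materialize for the secret source."""
    paths = []
    for src in vol["projected"]["sources"]:
        for item in src.get("secret", {}).get("items", []):
            paths.append(item["path"])
    return paths

print(mounted_paths(volume))  # ['new-path-data-1']
```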
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:14:48.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 9 09:14:48.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0" in namespace "projected-4624" to be "success or failure"
Mar 9 09:14:48.133: INFO: Pod "downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.207862ms
Mar 9 09:14:50.137: INFO: Pod "downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014719037s
STEP: Saw pod success
Mar 9 09:14:50.137: INFO: Pod "downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0" satisfied condition "success or failure"
Mar 9 09:14:50.139: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0 container client-container:
STEP: delete the pod
Mar 9 09:14:50.164: INFO: Waiting for pod downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0 to disappear
Mar 9 09:14:50.198: INFO: Pod downwardapi-volume-ed267740-e4b8-49ff-a269-013cc4ac09f0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:14:50.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4624" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1504,"failed":0}
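"Provide podname only" means the downward API volume projects a single `fieldRef` for `metadata.name` into a file. A small sketch of that resolution, handling only the simple dotted-path subset this test needs (the pod name below is illustrative):

```python
# Minimal sketch: a downwardAPI volume item that projects the pod's own
# name (fieldRef metadata.name) into the file "podname", as consumed by
# the client-container in the test above.
item = {"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}}

def resolve_field(pod: dict, field_path: str) -> str:
    """Resolve the simple 'section.key' subset of fieldRef paths used here."""
    section, _, key = field_path.partition(".")
    return pod[section][key]

pod = {"metadata": {"name": "downwardapi-volume-test",
                    "namespace": "projected-4624"}}
print(resolve_field(pod, item["fieldRef"]["fieldPath"]))  # downwardapi-volume-test
```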
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:14:50.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar 9 09:14:50.266: INFO: PodSpec: initContainers in spec.initContainers
Mar 9 09:15:40.739: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d5c7c6a2-7728-44db-924e-898d82de540e", GenerateName:"", Namespace:"init-container-4546", SelfLink:"/api/v1/namespaces/init-container-4546/pods/pod-init-d5c7c6a2-7728-44db-924e-898d82de540e", UID:"e4a6a1b7-9b41-44ac-991d-885e81f442a9", ResourceVersion:"265659", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"266036703"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8tvxk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0025a6000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tvxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tvxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8tvxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002ad4068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020c0240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ad40f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ad4110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002ad4118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002ad411c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719342090, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.18", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.18"}}, StartTime:(*v1.Time)(0xc001cd20a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0010740e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001074150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://97e1c4a7337b8c7b2015ac56f5d34878b593c5aeddf07e7652a7b074ebaeea67", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001cd2180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001cd2100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002ad419f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:15:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4546" for this suite.
• [SLOW TEST:50.575 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":91,"skipped":1507,"failed":0}
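The status dump above shows exactly the ordering guarantee this test asserts: `init1` (`/bin/false`) has terminated with `RestartCount:3`, while `init2` and the app container `run1` are still waiting. A sketch of that guarantee, with illustrative exit codes (`/bin/false` exits 1, `/bin/true` exits 0):

```python
# Sketch of init-container ordering: init containers run one at a time,
# a failure blocks every later init container and all app containers, and
# with restartPolicy=Always the kubelet keeps retrying the failed one.

def pod_progress(init_exit_codes: list) -> dict:
    """Which containers start, given each init container's exit code."""
    started = []
    for i, code in enumerate(init_exit_codes):
        started.append(f"init{i + 1}")
        if code != 0:
            # Failed init blocks the rest; under RestartPolicy=Always
            # it is restarted in place rather than failing the pod.
            return {"started": started, "app_started": False,
                    "retrying": f"init{i + 1}"}
    return {"started": started, "app_started": True, "retrying": None}

print(pod_progress([1, 0]))
# {'started': ['init1'], 'app_started': False, 'retrying': 'init1'}
```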
SSSSSSSSSSS
------------------------------
[sig-network] Services
should be able to create a functioning NodePort service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:15:40.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-1363
STEP: creating replication controller nodeport-test in namespace services-1363
I0309 09:15:40.907737 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1363, replica count: 2
I0309 09:15:43.958211 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 9 09:15:43.958: INFO: Creating new exec pod
Mar 9 09:15:46.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1363 execpodqkk96 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Mar 9 09:15:47.238: INFO: stderr: "I0309 09:15:47.159227 412 log.go:172] (0xc000592000) (0xc0006ba780) Create stream\nI0309 09:15:47.159285 412 log.go:172] (0xc000592000) (0xc0006ba780) Stream added, broadcasting: 1\nI0309 09:15:47.161919 412 log.go:172] (0xc000592000) Reply frame received for 1\nI0309 09:15:47.161955 412 log.go:172] (0xc000592000) (0xc0004b3540) Create stream\nI0309 09:15:47.161967 412 log.go:172] (0xc000592000) (0xc0004b3540) Stream added, broadcasting: 3\nI0309 09:15:47.163274 412 log.go:172] (0xc000592000) Reply frame received for 3\nI0309 09:15:47.163297 412 log.go:172] (0xc000592000) (0xc0004b35e0) Create stream\nI0309 09:15:47.163307 412 log.go:172] (0xc000592000) (0xc0004b35e0) Stream added, broadcasting: 5\nI0309 09:15:47.164628 412 log.go:172] (0xc000592000) Reply frame received for 5\nI0309 09:15:47.233056 412 log.go:172] (0xc000592000) Data frame received for 5\nI0309 09:15:47.233083 412 log.go:172] (0xc0004b35e0) (5) Data frame handling\nI0309 09:15:47.233097 412 log.go:172] (0xc0004b35e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0309 09:15:47.233235 412 log.go:172] (0xc000592000) Data frame received for 5\nI0309 09:15:47.233247 412 log.go:172] (0xc0004b35e0) (5) Data frame handling\nI0309 09:15:47.233257 412 log.go:172] (0xc0004b35e0) (5) Data frame sent\nI0309 09:15:47.233263 412 log.go:172] (0xc000592000) Data frame received for 5\nI0309 09:15:47.233275 412 log.go:172] (0xc0004b35e0) (5) Data frame handling\nI0309 09:15:47.233284 412 log.go:172] (0xc000592000) Data frame received for 3\nI0309 09:15:47.233293 412 log.go:172] (0xc0004b3540) (3) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0309 09:15:47.235033 412 log.go:172] (0xc000592000) Data frame received for 1\nI0309 09:15:47.235066 412 log.go:172] (0xc0006ba780) (1) Data frame handling\nI0309 09:15:47.235079 412 log.go:172] (0xc0006ba780) (1) Data frame sent\nI0309 09:15:47.235098 412 log.go:172] (0xc000592000) (0xc0006ba780) 
Stream removed, broadcasting: 1\nI0309 09:15:47.235122 412 log.go:172] (0xc000592000) Go away received\nI0309 09:15:47.235418 412 log.go:172] (0xc000592000) (0xc0006ba780) Stream removed, broadcasting: 1\nI0309 09:15:47.235437 412 log.go:172] (0xc000592000) (0xc0004b3540) Stream removed, broadcasting: 3\nI0309 09:15:47.235445 412 log.go:172] (0xc000592000) (0xc0004b35e0) Stream removed, broadcasting: 5\n"
Mar 9 09:15:47.238: INFO: stdout: ""
Mar 9 09:15:47.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1363 execpodqkk96 -- /bin/sh -x -c nc -zv -t -w 2 10.106.66.156 80'
Mar 9 09:15:47.422: INFO: stderr: "I0309 09:15:47.355220 432 log.go:172] (0xc000b72630) (0xc0008e8000) Create stream\nI0309 09:15:47.355267 432 log.go:172] (0xc000b72630) (0xc0008e8000) Stream added, broadcasting: 1\nI0309 09:15:47.356963 432 log.go:172] (0xc000b72630) Reply frame received for 1\nI0309 09:15:47.356998 432 log.go:172] (0xc000b72630) (0xc000711b80) Create stream\nI0309 09:15:47.357005 432 log.go:172] (0xc000b72630) (0xc000711b80) Stream added, broadcasting: 3\nI0309 09:15:47.357780 432 log.go:172] (0xc000b72630) Reply frame received for 3\nI0309 09:15:47.357838 432 log.go:172] (0xc000b72630) (0xc0008e80a0) Create stream\nI0309 09:15:47.357861 432 log.go:172] (0xc000b72630) (0xc0008e80a0) Stream added, broadcasting: 5\nI0309 09:15:47.358596 432 log.go:172] (0xc000b72630) Reply frame received for 5\nI0309 09:15:47.416212 432 log.go:172] (0xc000b72630) Data frame received for 5\nI0309 09:15:47.416242 432 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0309 09:15:47.416251 432 log.go:172] (0xc0008e80a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.66.156 80\nI0309 09:15:47.416302 432 log.go:172] (0xc000b72630) Data frame received for 5\nI0309 09:15:47.416309 432 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0309 09:15:47.416315 432 log.go:172] (0xc0008e80a0) (5) Data frame sent\nConnection to 10.106.66.156 80 port [tcp/http] succeeded!\nI0309 09:15:47.416623 432 log.go:172] (0xc000b72630) Data frame received for 5\nI0309 09:15:47.416638 432 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0309 09:15:47.417119 432 log.go:172] (0xc000b72630) Data frame received for 3\nI0309 09:15:47.417148 432 log.go:172] (0xc000711b80) (3) Data frame handling\nI0309 09:15:47.418337 432 log.go:172] (0xc000b72630) Data frame received for 1\nI0309 09:15:47.418388 432 log.go:172] (0xc0008e8000) (1) Data frame handling\nI0309 09:15:47.418412 432 log.go:172] (0xc0008e8000) (1) Data frame sent\nI0309 09:15:47.418428 432 log.go:172] (0xc000b72630) (0xc0008e8000) 
Stream removed, broadcasting: 1\nI0309 09:15:47.418442 432 log.go:172] (0xc000b72630) Go away received\nI0309 09:15:47.418748 432 log.go:172] (0xc000b72630) (0xc0008e8000) Stream removed, broadcasting: 1\nI0309 09:15:47.418765 432 log.go:172] (0xc000b72630) (0xc000711b80) Stream removed, broadcasting: 3\nI0309 09:15:47.418776 432 log.go:172] (0xc000b72630) (0xc0008e80a0) Stream removed, broadcasting: 5\n"
Mar 9 09:15:47.422: INFO: stdout: ""
Mar 9 09:15:47.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1363 execpodqkk96 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 31556'
Mar 9 09:15:47.593: INFO: stderr: "I0309 09:15:47.523862 453 log.go:172] (0xc000ab13f0) (0xc000a82780) Create stream\nI0309 09:15:47.523902 453 log.go:172] (0xc000ab13f0) (0xc000a82780) Stream added, broadcasting: 1\nI0309 09:15:47.525140 453 log.go:172] (0xc000ab13f0) Reply frame received for 1\nI0309 09:15:47.525167 453 log.go:172] (0xc000ab13f0) (0xc0004f5400) Create stream\nI0309 09:15:47.525175 453 log.go:172] (0xc000ab13f0) (0xc0004f5400) Stream added, broadcasting: 3\nI0309 09:15:47.526084 453 log.go:172] (0xc000ab13f0) Reply frame received for 3\nI0309 09:15:47.526177 453 log.go:172] (0xc000ab13f0) (0xc000650640) Create stream\nI0309 09:15:47.526189 453 log.go:172] (0xc000ab13f0) (0xc000650640) Stream added, broadcasting: 5\nI0309 09:15:47.526799 453 log.go:172] (0xc000ab13f0) Reply frame received for 5\nI0309 09:15:47.588235 453 log.go:172] (0xc000ab13f0) Data frame received for 5\nI0309 09:15:47.588269 453 log.go:172] (0xc000650640) (5) Data frame handling\nI0309 09:15:47.588282 453 log.go:172] (0xc000650640) (5) Data frame sent\nI0309 09:15:47.588292 453 log.go:172] (0xc000ab13f0) Data frame received for 5\nI0309 09:15:47.588299 453 log.go:172] (0xc000650640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.4 31556\nConnection to 172.17.0.4 31556 port [tcp/31556] succeeded!\nI0309 09:15:47.588310 453 log.go:172] (0xc000ab13f0) Data frame received for 3\nI0309 09:15:47.588365 453 log.go:172] (0xc0004f5400) (3) Data frame handling\nI0309 09:15:47.588399 453 log.go:172] (0xc000650640) (5) Data frame sent\nI0309 09:15:47.588741 453 log.go:172] (0xc000ab13f0) Data frame received for 5\nI0309 09:15:47.588764 453 log.go:172] (0xc000650640) (5) Data frame handling\nI0309 09:15:47.590255 453 log.go:172] (0xc000ab13f0) Data frame received for 1\nI0309 09:15:47.590283 453 log.go:172] (0xc000a82780) (1) Data frame handling\nI0309 09:15:47.590293 453 log.go:172] (0xc000a82780) (1) Data frame sent\nI0309 09:15:47.590310 453 log.go:172] (0xc000ab13f0) (0xc000a82780) 
Stream removed, broadcasting: 1\nI0309 09:15:47.590328 453 log.go:172] (0xc000ab13f0) Go away received\nI0309 09:15:47.590638 453 log.go:172] (0xc000ab13f0) (0xc000a82780) Stream removed, broadcasting: 1\nI0309 09:15:47.590654 453 log.go:172] (0xc000ab13f0) (0xc0004f5400) Stream removed, broadcasting: 3\nI0309 09:15:47.590663 453 log.go:172] (0xc000ab13f0) (0xc000650640) Stream removed, broadcasting: 5\n"
Mar 9 09:15:47.593: INFO: stdout: ""
Mar 9 09:15:47.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1363 execpodqkk96 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 31556'
Mar 9 09:15:47.785: INFO: stderr: "I0309 09:15:47.718515 473 log.go:172] (0xc00010ca50) (0xc000689cc0) Create stream\nI0309 09:15:47.718551 473 log.go:172] (0xc00010ca50) (0xc000689cc0) Stream added, broadcasting: 1\nI0309 09:15:47.720520 473 log.go:172] (0xc00010ca50) Reply frame received for 1\nI0309 09:15:47.720545 473 log.go:172] (0xc00010ca50) (0xc000a22000) Create stream\nI0309 09:15:47.720556 473 log.go:172] (0xc00010ca50) (0xc000a22000) Stream added, broadcasting: 3\nI0309 09:15:47.721202 473 log.go:172] (0xc00010ca50) Reply frame received for 3\nI0309 09:15:47.721236 473 log.go:172] (0xc00010ca50) (0xc000a220a0) Create stream\nI0309 09:15:47.721248 473 log.go:172] (0xc00010ca50) (0xc000a220a0) Stream added, broadcasting: 5\nI0309 09:15:47.722269 473 log.go:172] (0xc00010ca50) Reply frame received for 5\nI0309 09:15:47.780906 473 log.go:172] (0xc00010ca50) Data frame received for 5\nI0309 09:15:47.780940 473 log.go:172] (0xc000a220a0) (5) Data frame handling\nI0309 09:15:47.780952 473 log.go:172] (0xc000a220a0) (5) Data frame sent\nI0309 09:15:47.780961 473 log.go:172] (0xc00010ca50) Data frame received for 5\nI0309 09:15:47.780970 473 log.go:172] (0xc000a220a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.5 31556\nConnection to 172.17.0.5 31556 port [tcp/31556] succeeded!\nI0309 09:15:47.780992 473 log.go:172] (0xc00010ca50) Data frame received for 3\nI0309 09:15:47.781000 473 log.go:172] (0xc000a22000) (3) Data frame handling\nI0309 09:15:47.782597 473 log.go:172] (0xc00010ca50) Data frame received for 1\nI0309 09:15:47.782618 473 log.go:172] (0xc000689cc0) (1) Data frame handling\nI0309 09:15:47.782629 473 log.go:172] (0xc000689cc0) (1) Data frame sent\nI0309 09:15:47.782643 473 log.go:172] (0xc00010ca50) (0xc000689cc0) Stream removed, broadcasting: 1\nI0309 09:15:47.782661 473 log.go:172] (0xc00010ca50) Go away received\nI0309 09:15:47.782951 473 log.go:172] (0xc00010ca50) (0xc000689cc0) Stream removed, broadcasting: 1\nI0309 09:15:47.782968 473 
log.go:172] (0xc00010ca50) (0xc000a22000) Stream removed, broadcasting: 3\nI0309 09:15:47.782975 473 log.go:172] (0xc00010ca50) (0xc000a220a0) Stream removed, broadcasting: 5\n"
Mar 9 09:15:47.786: INFO: stdout: ""
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:15:47.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1363" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:7.013 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to create a functioning NodePort service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":92,"skipped":1518,"failed":0}
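The four `nc -zv` probes in the log form a connectivity matrix: a working NodePort service must answer on its DNS name and cluster IP at the service port, and on every node IP at the allocated node port. Sketched below with the addresses from this run, which are environment-specific:

```python
# The (host, port) pairs the test's exec pod probes with `nc -zv -t -w 2`.
# The cluster IP, node IPs, and node port mirror the log above but are
# allocated per-cluster and per-run.

def probe_targets(svc: dict, node_ips: list) -> list:
    """All endpoints a NodePort service is expected to answer on."""
    targets = [(svc["name"], svc["port"]),       # service DNS name
               (svc["clusterIP"], svc["port"])]  # cluster IP
    targets += [(ip, svc["nodePort"]) for ip in node_ips]  # each node
    return targets

svc = {"name": "nodeport-test", "port": 80,
       "clusterIP": "10.106.66.156", "nodePort": 31556}
for host, port in probe_targets(svc, ["172.17.0.4", "172.17.0.5"]):
    print(f"nc -zv -t -w 2 {host} {port}")
```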
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:15:47.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 9 09:15:49.930: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:15:49.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1896" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1519,"failed":0}
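The empty-message expectation in the log (`Expected: &{} to match ... --`) follows from the policy semantics: with `FallbackToLogsOnError`, the log tail is substituted only when the termination-message file is empty *and* the container failed. A succeeding container with an empty file reports an empty message. Sketched, with an illustrative log tail:

```python
# Sketch of TerminationMessagePolicy=FallbackToLogsOnError: the fallback
# to the container's log tail applies only on a non-zero exit with an
# empty termination-message file.

def termination_message(file_contents: str, exit_code: int, log_tail: str) -> str:
    if file_contents:
        return file_contents          # explicit message always wins
    if exit_code != 0:
        return log_tail               # fallback applies only on error
    return ""                         # success + empty file => empty message

print(repr(termination_message("", 0, "some logs")))  # ''
print(repr(termination_message("", 1, "some logs")))  # 'some logs'
```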
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:15:49.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Mar 9 09:15:50.083: INFO: Waiting up to 5m0s for pod "var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d" in namespace "var-expansion-6898" to be "success or failure"
Mar 9 09:15:50.087: INFO: Pod "var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233926ms
Mar 9 09:15:52.091: INFO: Pod "var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008222992s
STEP: Saw pod success
Mar 9 09:15:52.091: INFO: Pod "var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d" satisfied condition "success or failure"
Mar 9 09:15:52.094: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d container dapi-container:
STEP: delete the pod
Mar 9 09:15:52.139: INFO: Waiting for pod var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d to disappear
Mar 9 09:15:52.147: INFO: Pod var-expansion-8b19f262-4a7a-447c-972d-8a58d3f24f3d no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:15:52.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6898" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1551,"failed":0}
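[Editor's note: not part of the log. The var-expansion test above verifies that an env var can be composed from previously declared env vars using `$(VAR)` syntax. A minimal sketch with hypothetical names and values:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # Prints the composed value; $(COMPOSED_VAR) is expanded by the kubelet.
    command: ["/bin/sh", "-c", "echo $(COMPOSED_VAR)"]
    env:
    - name: BASE_VAR
      value: "base"
    # Env vars may reference vars defined earlier in this list.
    - name: COMPOSED_VAR
      value: "prefix-$(BASE_VAR)-suffix"
```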
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:15:52.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 9 09:15:52.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2" in namespace "projected-1246" to be "success or failure"
Mar 9 09:15:52.238: INFO: Pod "downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447792ms
Mar 9 09:15:54.250: INFO: Pod "downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013846333s
STEP: Saw pod success
Mar 9 09:15:54.250: INFO: Pod "downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2" satisfied condition "success or failure"
Mar 9 09:15:54.253: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2 container client-container:
STEP: delete the pod
Mar 9 09:15:54.283: INFO: Waiting for pod downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2 to disappear
Mar 9 09:15:54.291: INFO: Pod downwardapi-volume-17b53159-a468-4db1-8828-be9b45dd4ce2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:15:54.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1246" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1582,"failed":0}
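[Editor's note: not part of the log. The projected downwardAPI test above sets an explicit file mode on an individual item in a projected volume and verifies the file's permissions inside the pod. A sketch of the relevant spec, with hypothetical names and paths:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400          # per-item file mode, the field this test exercises
            fieldRef:
              fieldPath: metadata.name
```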
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:15:54.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:15:54.388: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:15:55.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3414" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":96,"skipped":1591,"failed":0}
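[Editor's note: not part of the log. The test above creates and deletes a simple CustomResourceDefinition via the API. A minimal sketch of such a CRD — group, kind, and names here are hypothetical; in apiextensions.k8s.io/v1 a schema is required, so `x-kubernetes-preserve-unknown-fields` is used to keep it effectively schemaless:]

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>
  name: testcrds.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # Accept arbitrary fields instead of enforcing a structural schema.
        x-kubernetes-preserve-unknown-fields: true
```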
SSSSSSSSSSSSS
------------------------------
[k8s.io] Lease
lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:15:55.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:15:55.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-4815" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":97,"skipped":1604,"failed":0}
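[Editor's note: not part of the log. The Lease test above exercises CRUD on the coordination.k8s.io Lease API. A minimal sketch of a Lease object, with hypothetical names:]

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease          # hypothetical name
  namespace: default
spec:
  # Identity of the current holder; leader-election clients renew this.
  holderIdentity: demo-holder
  leaseDurationSeconds: 30
```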
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:15:55.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:16:55.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1091" for this suite.
• [SLOW TEST:60.097 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1617,"failed":0}
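[Editor's note: not part of the log. The probe test above runs a pod whose readiness probe always fails and asserts, over the full 60s window, that the pod never becomes Ready and its restart count stays 0 (readiness probes gate traffic, they never restart a container; only liveness probes do). A sketch with hypothetical names:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo   # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails -> pod never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
    # No livenessProbe, so the failing readiness probe causes no restarts.
```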
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:16:55.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:16:55.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 9 09:16:58.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6207 create -f -'
Mar 9 09:17:00.538: INFO: stderr: ""
Mar 9 09:17:00.538: INFO: stdout: "e2e-test-crd-publish-openapi-6406-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 9 09:17:00.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6207 delete e2e-test-crd-publish-openapi-6406-crds test-cr'
Mar 9 09:17:00.641: INFO: stderr: ""
Mar 9 09:17:00.642: INFO: stdout: "e2e-test-crd-publish-openapi-6406-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Mar 9 09:17:00.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6207 apply -f -'
Mar 9 09:17:00.909: INFO: stderr: ""
Mar 9 09:17:00.909: INFO: stdout: "e2e-test-crd-publish-openapi-6406-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 9 09:17:00.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6207 delete e2e-test-crd-publish-openapi-6406-crds test-cr'
Mar 9 09:17:00.994: INFO: stderr: ""
Mar 9 09:17:00.994: INFO: stdout: "e2e-test-crd-publish-openapi-6406-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Mar 9 09:17:00.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6406-crds'
Mar 9 09:17:01.238: INFO: stderr: ""
Mar 9 09:17:01.238: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6406-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:17:03.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6207" for this suite.
• [SLOW TEST:8.302 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":99,"skipped":1660,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:17:04.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 9 09:17:04.077: INFO: Waiting up to 5m0s for pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab" in namespace "emptydir-32" to be "success or failure"
Mar 9 09:17:04.083: INFO: Pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab": Phase="Pending", Reason="", readiness=false. Elapsed: 5.477627ms
Mar 9 09:17:06.093: INFO: Pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015913726s
Mar 9 09:17:08.097: INFO: Pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019895329s
STEP: Saw pod success
Mar 9 09:17:08.097: INFO: Pod "pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab" satisfied condition "success or failure"
Mar 9 09:17:08.100: INFO: Trying to get logs from node jerma-worker2 pod pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab container test-container:
STEP: delete the pod
Mar 9 09:17:08.135: INFO: Waiting for pod pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab to disappear
Mar 9 09:17:08.143: INFO: Pod pod-97f3c0c0-5ac1-4afb-91c0-c56abf556dab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:17:08.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-32" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1670,"failed":0}
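[Editor's note: not part of the log. The emptyDir test above mounts a tmpfs-backed emptyDir (medium: Memory) and has the test container check the 0777 permissions and root ownership of the mount; the mode is verified by the container, not set in the volume spec. A sketch with hypothetical names:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Inspects ownership and permissions of the tmpfs mount.
    command: ["/bin/sh", "-c", "ls -ld /mnt/volume"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # backs the volume with tmpfs instead of node disk
```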
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:17:08.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7306
[It] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-7306
Mar 9 09:17:08.210: INFO: Found 0 stateful pods, waiting for 1
Mar 9 09:17:18.214: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 9 09:17:18.238: INFO: Deleting all statefulset in ns statefulset-7306
Mar 9 09:17:18.244: INFO: Scaling statefulset ss to 0
Mar 9 09:17:38.322: INFO: Waiting for statefulset status.replicas updated to 0
Mar 9 09:17:38.325: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:17:38.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7306" for this suite.
• [SLOW TEST:30.195 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":101,"skipped":1754,"failed":0}
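[Editor's note: not part of the log. The StatefulSet test above reads and updates the `scale` subresource rather than the StatefulSet object itself. The subresource is served as an autoscaling/v1 Scale object; a sketch of the payload written back when scaling `ss` (namespace taken from the log, replica count hypothetical):]

```yaml
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: ss
  namespace: statefulset-7306
spec:
  replicas: 2   # updating this via the subresource modifies Spec.Replicas
```

The same operation is what `kubectl scale statefulset ss --replicas=2` performs under the hood.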
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:17:38.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4790
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 9 09:17:38.442: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 9 09:17:54.593: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.15:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4790 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 09:17:54.593: INFO: >>> kubeConfig: /root/.kube/config
I0309 09:17:54.626512 6 log.go:172] (0xc001372420) (0xc000e8f220) Create stream
I0309 09:17:54.626547 6 log.go:172] (0xc001372420) (0xc000e8f220) Stream added, broadcasting: 1
I0309 09:17:54.629069 6 log.go:172] (0xc001372420) Reply frame received for 1
I0309 09:17:54.629123 6 log.go:172] (0xc001372420) (0xc0014d6d20) Create stream
I0309 09:17:54.629140 6 log.go:172] (0xc001372420) (0xc0014d6d20) Stream added, broadcasting: 3
I0309 09:17:54.630467 6 log.go:172] (0xc001372420) Reply frame received for 3
I0309 09:17:54.630517 6 log.go:172] (0xc001372420) (0xc001e9fc20) Create stream
I0309 09:17:54.630532 6 log.go:172] (0xc001372420) (0xc001e9fc20) Stream added, broadcasting: 5
I0309 09:17:54.631703 6 log.go:172] (0xc001372420) Reply frame received for 5
I0309 09:17:54.704388 6 log.go:172] (0xc001372420) Data frame received for 3
I0309 09:17:54.704426 6 log.go:172] (0xc0014d6d20) (3) Data frame handling
I0309 09:17:54.704438 6 log.go:172] (0xc0014d6d20) (3) Data frame sent
I0309 09:17:54.704474 6 log.go:172] (0xc001372420) Data frame received for 5
I0309 09:17:54.704525 6 log.go:172] (0xc001e9fc20) (5) Data frame handling
I0309 09:17:54.704570 6 log.go:172] (0xc001372420) Data frame received for 3
I0309 09:17:54.704596 6 log.go:172] (0xc0014d6d20) (3) Data frame handling
I0309 09:17:54.706569 6 log.go:172] (0xc001372420) Data frame received for 1
I0309 09:17:54.706614 6 log.go:172] (0xc000e8f220) (1) Data frame handling
I0309 09:17:54.706642 6 log.go:172] (0xc000e8f220) (1) Data frame sent
I0309 09:17:54.706665 6 log.go:172] (0xc001372420) (0xc000e8f220) Stream removed, broadcasting: 1
I0309 09:17:54.706782 6 log.go:172] (0xc001372420) Go away received
I0309 09:17:54.706858 6 log.go:172] (0xc001372420) (0xc000e8f220) Stream removed, broadcasting: 1
I0309 09:17:54.706893 6 log.go:172] (0xc001372420) (0xc0014d6d20) Stream removed, broadcasting: 3
I0309 09:17:54.706910 6 log.go:172] (0xc001372420) (0xc001e9fc20) Stream removed, broadcasting: 5
Mar 9 09:17:54.706: INFO: Found all expected endpoints: [netserver-0]
Mar 9 09:17:54.710: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.25:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4790 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 9 09:17:54.710: INFO: >>> kubeConfig: /root/.kube/config
I0309 09:17:54.743767 6 log.go:172] (0xc0016080b0) (0xc0014d7cc0) Create stream
I0309 09:17:54.743794 6 log.go:172] (0xc0016080b0) (0xc0014d7cc0) Stream added, broadcasting: 1
I0309 09:17:54.745984 6 log.go:172] (0xc0016080b0) Reply frame received for 1
I0309 09:17:54.746031 6 log.go:172] (0xc0016080b0) (0xc0014863c0) Create stream
I0309 09:17:54.746065 6 log.go:172] (0xc0016080b0) (0xc0014863c0) Stream added, broadcasting: 3
I0309 09:17:54.746944 6 log.go:172] (0xc0016080b0) Reply frame received for 3
I0309 09:17:54.746979 6 log.go:172] (0xc0016080b0) (0xc0014865a0) Create stream
I0309 09:17:54.746990 6 log.go:172] (0xc0016080b0) (0xc0014865a0) Stream added, broadcasting: 5
I0309 09:17:54.747761 6 log.go:172] (0xc0016080b0) Reply frame received for 5
I0309 09:17:54.833719 6 log.go:172] (0xc0016080b0) Data frame received for 3
I0309 09:17:54.833750 6 log.go:172] (0xc0014863c0) (3) Data frame handling
I0309 09:17:54.833770 6 log.go:172] (0xc0014863c0) (3) Data frame sent
I0309 09:17:54.833781 6 log.go:172] (0xc0016080b0) Data frame received for 3
I0309 09:17:54.833791 6 log.go:172] (0xc0014863c0) (3) Data frame handling
I0309 09:17:54.834268 6 log.go:172] (0xc0016080b0) Data frame received for 5
I0309 09:17:54.834307 6 log.go:172] (0xc0014865a0) (5) Data frame handling
I0309 09:17:54.835713 6 log.go:172] (0xc0016080b0) Data frame received for 1
I0309 09:17:54.835744 6 log.go:172] (0xc0014d7cc0) (1) Data frame handling
I0309 09:17:54.835769 6 log.go:172] (0xc0014d7cc0) (1) Data frame sent
I0309 09:17:54.835788 6 log.go:172] (0xc0016080b0) (0xc0014d7cc0) Stream removed, broadcasting: 1
I0309 09:17:54.835806 6 log.go:172] (0xc0016080b0) Go away received
I0309 09:17:54.835943 6 log.go:172] (0xc0016080b0) (0xc0014d7cc0) Stream removed, broadcasting: 1
I0309 09:17:54.835967 6 log.go:172] (0xc0016080b0) (0xc0014863c0) Stream removed, broadcasting: 3
I0309 09:17:54.835980 6 log.go:172] (0xc0016080b0) (0xc0014865a0) Stream removed, broadcasting: 5
Mar 9 09:17:54.835: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:17:54.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4790" for this suite.
• [SLOW TEST:16.498 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1773,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:17:54.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Mar 9 09:17:54.939: INFO: Pod name pod-release: Found 0 pods out of 1
Mar 9 09:17:59.943: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:00.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3499" for this suite.
• [SLOW TEST:6.124 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":103,"skipped":1805,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:00.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-ad013205-c473-45d8-8e2f-8d7bc2a78176
STEP: Creating a pod to test consume secrets
Mar 9 09:18:01.082: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007" in namespace "projected-3249" to be "success or failure"
Mar 9 09:18:01.085: INFO: Pod "pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.62895ms
Mar 9 09:18:03.089: INFO: Pod "pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007298809s
STEP: Saw pod success
Mar 9 09:18:03.089: INFO: Pod "pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007" satisfied condition "success or failure"
Mar 9 09:18:03.091: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007 container secret-volume-test:
STEP: delete the pod
Mar 9 09:18:03.111: INFO: Waiting for pod pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007 to disappear
Mar 9 09:18:03.115: INFO: Pod pod-projected-secrets-af130da7-2dcc-4bee-9821-3a9e768c9007 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:03.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3249" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1876,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:03.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-de23d10c-a8ea-468f-8b1c-f64c971cd5e7
STEP: Creating a pod to test consume configMaps
Mar 9 09:18:03.314: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6" in namespace "projected-1812" to be "success or failure"
Mar 9 09:18:03.325: INFO: Pod "pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.460577ms
Mar 9 09:18:05.328: INFO: Pod "pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013995481s
STEP: Saw pod success
Mar 9 09:18:05.328: INFO: Pod "pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6" satisfied condition "success or failure"
Mar 9 09:18:05.332: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6 container projected-configmap-volume-test:
STEP: delete the pod
Mar 9 09:18:05.356: INFO: Waiting for pod pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6 to disappear
Mar 9 09:18:05.411: INFO: Pod pod-projected-configmaps-ad454ce2-7d9d-4339-a154-ea748388b2e6 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:05.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1812" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1880,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:05.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 9 09:18:06.084: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 9 09:18:09.144: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:09.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2036" for this suite.
STEP: Destroying namespace "webhook-2036-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":106,"skipped":1883,"failed":0}
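The patch step in the spec above is performed through the Kubernetes client API; a roughly equivalent kubectl invocation can be sketched as below. The configuration name and rule index are hypothetical (the log does not show them), and the command is only assembled and printed, since no cluster is assumed.

```shell
# Hypothetical sketch: re-adding the CREATE operation to a mutating webhook's
# rules, mirroring the "Patching a mutating webhook configuration's rules" step.
# The name "e2e-test-mutating-webhook" and rule index 0 are assumptions.
PATCH='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
CMD="kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json -p='${PATCH}'"
# Printed rather than executed, since no cluster is available here.
echo "$CMD"
```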
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:09.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:09.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-935" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":107,"skipped":1902,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:09.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 9 09:18:09.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5068'
Mar 9 09:18:09.892: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 9 09:18:09.892: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793
Mar 9 09:18:09.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5068'
Mar 9 09:18:10.021: INFO: stderr: ""
Mar 9 09:18:10.021: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:10.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5068" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":108,"skipped":1909,"failed":0}
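The deprecation warning captured in the log means `kubectl run --generator=job/v1` was already slated for removal at this Kubernetes version. A sketch of the invocation the test used next to the replacement the warning suggests (commands are assembled and printed only; no cluster is assumed):

```shell
# Old form used by the test (deprecated at the time of this run):
OLD="kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine"
# Replacement per the warning: create the Job resource directly.
NEW="kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine"
echo "$OLD"
echo "$NEW"
```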
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:10.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Mar 9 09:18:10.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5846'
Mar 9 09:18:10.292: INFO: stderr: ""
Mar 9 09:18:10.292: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 9 09:18:10.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846'
Mar 9 09:18:10.416: INFO: stderr: ""
Mar 9 09:18:10.416: INFO: stdout: "update-demo-nautilus-jnm45 update-demo-nautilus-sfcnt "
Mar 9 09:18:10.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnm45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:10.509: INFO: stderr: ""
Mar 9 09:18:10.509: INFO: stdout: ""
Mar 9 09:18:10.509: INFO: update-demo-nautilus-jnm45 is created but not running
Mar 9 09:18:15.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846'
Mar 9 09:18:15.619: INFO: stderr: ""
Mar 9 09:18:15.619: INFO: stdout: "update-demo-nautilus-jnm45 update-demo-nautilus-sfcnt "
Mar 9 09:18:15.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnm45 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:15.720: INFO: stderr: ""
Mar 9 09:18:15.720: INFO: stdout: "true"
Mar 9 09:18:15.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jnm45 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:15.816: INFO: stderr: ""
Mar 9 09:18:15.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 9 09:18:15.816: INFO: validating pod update-demo-nautilus-jnm45
Mar 9 09:18:15.820: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 9 09:18:15.820: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 9 09:18:15.820: INFO: update-demo-nautilus-jnm45 is verified up and running
Mar 9 09:18:15.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:15.890: INFO: stderr: ""
Mar 9 09:18:15.890: INFO: stdout: "true"
Mar 9 09:18:15.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:15.981: INFO: stderr: ""
Mar 9 09:18:15.981: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 9 09:18:15.982: INFO: validating pod update-demo-nautilus-sfcnt
Mar 9 09:18:15.985: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 9 09:18:15.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 9 09:18:15.985: INFO: update-demo-nautilus-sfcnt is verified up and running
STEP: scaling down the replication controller
Mar 9 09:18:15.988: INFO: scanned /root for discovery docs:
Mar 9 09:18:15.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5846'
Mar 9 09:18:17.137: INFO: stderr: ""
Mar 9 09:18:17.137: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 9 09:18:17.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846'
Mar 9 09:18:17.253: INFO: stderr: ""
Mar 9 09:18:17.253: INFO: stdout: "update-demo-nautilus-jnm45 update-demo-nautilus-sfcnt "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 9 09:18:22.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846'
Mar 9 09:18:22.378: INFO: stderr: ""
Mar 9 09:18:22.379: INFO: stdout: "update-demo-nautilus-jnm45 update-demo-nautilus-sfcnt "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 9 09:18:27.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846'
Mar 9 09:18:27.458: INFO: stderr: ""
Mar 9 09:18:27.458: INFO: stdout: "update-demo-nautilus-sfcnt "
Mar 9 09:18:27.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:27.529: INFO: stderr: ""
Mar 9 09:18:27.529: INFO: stdout: "true"
Mar 9 09:18:27.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:27.602: INFO: stderr: ""
Mar 9 09:18:27.602: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 9 09:18:27.602: INFO: validating pod update-demo-nautilus-sfcnt
Mar 9 09:18:27.612: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 9 09:18:27.612: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 9 09:18:27.612: INFO: update-demo-nautilus-sfcnt is verified up and running
STEP: scaling up the replication controller
Mar 9 09:18:27.614: INFO: scanned /root for discovery docs:
Mar 9 09:18:27.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5846'
Mar 9 09:18:28.750: INFO: stderr: ""
Mar 9 09:18:28.750: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 9 09:18:28.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846'
Mar 9 09:18:28.853: INFO: stderr: ""
Mar 9 09:18:28.853: INFO: stdout: "update-demo-nautilus-sfcnt update-demo-nautilus-sp5xk "
Mar 9 09:18:28.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:28.957: INFO: stderr: ""
Mar 9 09:18:28.957: INFO: stdout: "true"
Mar 9 09:18:28.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:29.034: INFO: stderr: ""
Mar 9 09:18:29.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 9 09:18:29.034: INFO: validating pod update-demo-nautilus-sfcnt
Mar 9 09:18:29.036: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 9 09:18:29.036: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 9 09:18:29.036: INFO: update-demo-nautilus-sfcnt is verified up and running
Mar 9 09:18:29.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5xk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:29.105: INFO: stderr: ""
Mar 9 09:18:29.105: INFO: stdout: ""
Mar 9 09:18:29.105: INFO: update-demo-nautilus-sp5xk is created but not running
Mar 9 09:18:34.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5846'
Mar 9 09:18:34.226: INFO: stderr: ""
Mar 9 09:18:34.226: INFO: stdout: "update-demo-nautilus-sfcnt update-demo-nautilus-sp5xk "
Mar 9 09:18:34.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:34.327: INFO: stderr: ""
Mar 9 09:18:34.327: INFO: stdout: "true"
Mar 9 09:18:34.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfcnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:34.394: INFO: stderr: ""
Mar 9 09:18:34.394: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 9 09:18:34.394: INFO: validating pod update-demo-nautilus-sfcnt
Mar 9 09:18:34.397: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 9 09:18:34.397: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 9 09:18:34.397: INFO: update-demo-nautilus-sfcnt is verified up and running
Mar 9 09:18:34.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5xk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:34.481: INFO: stderr: ""
Mar 9 09:18:34.481: INFO: stdout: "true"
Mar 9 09:18:34.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp5xk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5846'
Mar 9 09:18:34.546: INFO: stderr: ""
Mar 9 09:18:34.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 9 09:18:34.546: INFO: validating pod update-demo-nautilus-sp5xk
Mar 9 09:18:34.549: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 9 09:18:34.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 9 09:18:34.549: INFO: update-demo-nautilus-sp5xk is verified up and running
STEP: using delete to clean up resources
Mar 9 09:18:34.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5846'
Mar 9 09:18:34.646: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 9 09:18:34.646: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 9 09:18:34.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5846'
Mar 9 09:18:34.715: INFO: stderr: "No resources found in kubectl-5846 namespace.\n"
Mar 9 09:18:34.715: INFO: stdout: ""
Mar 9 09:18:34.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5846 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 9 09:18:34.801: INFO: stderr: ""
Mar 9 09:18:34.801: INFO: stdout: "update-demo-nautilus-sfcnt\nupdate-demo-nautilus-sp5xk\n"
Mar 9 09:18:35.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5846'
Mar 9 09:18:35.411: INFO: stderr: "No resources found in kubectl-5846 namespace.\n"
Mar 9 09:18:35.411: INFO: stdout: ""
Mar 9 09:18:35.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5846 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 9 09:18:35.488: INFO: stderr: ""
Mar 9 09:18:35.488: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:35.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5846" for this suite.
• [SLOW TEST:25.467 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":109,"skipped":1924,"failed":0}
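The scale-down/scale-up cycle exercised above boils down to two kubectl invocations. A sketch using the RC name and namespace from the log (commands are assembled and printed only; no cluster is assumed):

```shell
NS=kubectl-5846
RC=update-demo-nautilus
# Scale from 2 replicas down to 1, then back up to 2, as the test does.
DOWN="kubectl scale rc ${RC} --replicas=1 --timeout=5m --namespace=${NS}"
UP="kubectl scale rc ${RC} --replicas=2 --timeout=5m --namespace=${NS}"
echo "$DOWN"
echo "$UP"
```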
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:35.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464
STEP: creating a pod
Mar 9 09:18:35.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-317 -- logs-generator --log-lines-total 100 --run-duration 20s'
Mar 9 09:18:35.678: INFO: stderr: ""
Mar 9 09:18:35.678: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Mar 9 09:18:35.678: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Mar 9 09:18:35.678: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-317" to be "running and ready, or succeeded"
Mar 9 09:18:35.691: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.629228ms
Mar 9 09:18:37.694: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.016158192s
Mar 9 09:18:37.694: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Mar 9 09:18:37.694: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Mar 9 09:18:37.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317'
Mar 9 09:18:37.826: INFO: stderr: ""
Mar 9 09:18:37.826: INFO: stdout: "I0309 09:18:36.908412 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/9gh 556\nI0309 09:18:37.108671 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wwxs 590\nI0309 09:18:37.308580 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/tdm 534\nI0309 09:18:37.508685 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/vrc 361\nI0309 09:18:37.708583 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/8tgb 345\n"
STEP: limiting log lines
Mar 9 09:18:37.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --tail=1'
Mar 9 09:18:37.935: INFO: stderr: ""
Mar 9 09:18:37.936: INFO: stdout: "I0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\n"
Mar 9 09:18:37.936: INFO: got output "I0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\n"
STEP: limiting log bytes
Mar 9 09:18:37.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --limit-bytes=1'
Mar 9 09:18:38.014: INFO: stderr: ""
Mar 9 09:18:38.014: INFO: stdout: "I"
Mar 9 09:18:38.014: INFO: got output "I"
STEP: exposing timestamps
Mar 9 09:18:38.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --tail=1 --timestamps'
Mar 9 09:18:38.106: INFO: stderr: ""
Mar 9 09:18:38.106: INFO: stdout: "2020-03-09T09:18:37.908692024Z I0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\n"
Mar 9 09:18:38.106: INFO: got output "2020-03-09T09:18:37.908692024Z I0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\n"
STEP: restricting to a time range
Mar 9 09:18:40.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --since=1s'
Mar 9 09:18:40.774: INFO: stderr: ""
Mar 9 09:18:40.774: INFO: stdout: "I0309 09:18:39.908538 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/lz9k 549\nI0309 09:18:40.108630 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/z62j 506\nI0309 09:18:40.308594 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/g4bj 284\nI0309 09:18:40.508574 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/jtt 431\nI0309 09:18:40.708588 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/qnp5 357\n"
Mar 9 09:18:40.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-317 --since=24h'
Mar 9 09:18:40.858: INFO: stderr: ""
Mar 9 09:18:40.858: INFO: stdout: "I0309 09:18:36.908412 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/9gh 556\nI0309 09:18:37.108671 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/wwxs 590\nI0309 09:18:37.308580 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/tdm 534\nI0309 09:18:37.508685 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/vrc 361\nI0309 09:18:37.708583 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/8tgb 345\nI0309 09:18:37.908549 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/wq4m 503\nI0309 09:18:38.108540 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/mcw9 533\nI0309 09:18:38.308593 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/r4l 506\nI0309 09:18:38.508612 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/9gf 519\nI0309 09:18:38.708676 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/gfvl 295\nI0309 09:18:38.908602 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/95zp 426\nI0309 09:18:39.108639 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/glxh 469\nI0309 09:18:39.308649 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/tqv 500\nI0309 09:18:39.508598 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/r7q 219\nI0309 09:18:39.708659 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/clj 250\nI0309 09:18:39.908538 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/lz9k 549\nI0309 09:18:40.108630 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/z62j 506\nI0309 09:18:40.308594 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/g4bj 284\nI0309 09:18:40.508574 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/jtt 431\nI0309 09:18:40.708588 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/qnp5 357\n"
[AfterEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470
Mar 9 09:18:40.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-317'
Mar 9 09:18:46.055: INFO: stderr: ""
Mar 9 09:18:46.055: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:46.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-317" for this suite.
• [SLOW TEST:10.574 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":110,"skipped":1940,"failed":0}
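The four log-filtering modes exercised above (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) can be collected into one sketch, using the pod and namespace from the log. The commands are assembled and printed only, since no cluster is assumed:

```shell
NS=kubectl-317
POD=logs-generator
TAIL="kubectl logs ${POD} --namespace=${NS} --tail=1"             # last line only
BYTES="kubectl logs ${POD} --namespace=${NS} --limit-bytes=1"     # first byte only
TS="kubectl logs ${POD} --namespace=${NS} --tail=1 --timestamps"  # prepend RFC3339 timestamps
SINCE="kubectl logs ${POD} --namespace=${NS} --since=1s"          # entries from the last second
for c in "$TAIL" "$BYTES" "$TS" "$SINCE"; do echo "$c"; done
```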
[sig-storage] EmptyDir volumes
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:46.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 9 09:18:46.171: INFO: Waiting up to 5m0s for pod "pod-195071b0-3241-468a-b265-94478ce669a9" in namespace "emptydir-8176" to be "success or failure"
Mar 9 09:18:46.208: INFO: Pod "pod-195071b0-3241-468a-b265-94478ce669a9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.510659ms
Mar 9 09:18:48.212: INFO: Pod "pod-195071b0-3241-468a-b265-94478ce669a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040869976s
STEP: Saw pod success
Mar 9 09:18:48.212: INFO: Pod "pod-195071b0-3241-468a-b265-94478ce669a9" satisfied condition "success or failure"
Mar 9 09:18:48.215: INFO: Trying to get logs from node jerma-worker2 pod pod-195071b0-3241-468a-b265-94478ce669a9 container test-container:
STEP: delete the pod
Mar 9 09:18:48.267: INFO: Waiting for pod pod-195071b0-3241-468a-b265-94478ce669a9 to disappear
Mar 9 09:18:48.277: INFO: Pod pod-195071b0-3241-468a-b265-94478ce669a9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:18:48.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8176" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1940,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:18:48.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:18:48.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Mar 9 09:18:48.966: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:48Z generation:1 name:name1 resourceVersion:266946 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:40ac3a32-8547-47d6-b4ca-9aca855af20f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Mar 9 09:18:58.972: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:58Z generation:1 name:name2 resourceVersion:266995 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cc9c7227-3434-45f4-b2ea-64a05c41e23e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Mar 9 09:19:08.979: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:48Z generation:2 name:name1 resourceVersion:267025 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:40ac3a32-8547-47d6-b4ca-9aca855af20f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Mar 9 09:19:18.986: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:58Z generation:2 name:name2 resourceVersion:267055 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cc9c7227-3434-45f4-b2ea-64a05c41e23e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Mar 9 09:19:28.993: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:48Z generation:2 name:name1 resourceVersion:267085 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:40ac3a32-8547-47d6-b4ca-9aca855af20f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Mar 9 09:19:39.001: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-09T09:18:58Z generation:2 name:name2 resourceVersion:267115 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cc9c7227-3434-45f4-b2ea-64a05c41e23e] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:19:49.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2187" for this suite.
• [SLOW TEST:61.233 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":112,"skipped":1955,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:19:49.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:19:49.598: INFO: Creating deployment "webserver-deployment"
Mar 9 09:19:49.619: INFO: Waiting for observed generation 1
Mar 9 09:19:51.676: INFO: Waiting for all required pods to come up
Mar 9 09:19:51.694: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 9 09:19:55.704: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 9 09:19:55.711: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 9 09:19:55.719: INFO: Updating deployment webserver-deployment
Mar 9 09:19:55.719: INFO: Waiting for observed generation 2
Mar 9 09:19:57.790: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 9 09:19:57.792: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 9 09:19:57.795: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 9 09:19:57.801: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 9 09:19:57.801: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 9 09:19:57.803: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 9 09:19:57.807: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 9 09:19:57.807: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 9 09:19:57.820: INFO: Updating deployment webserver-deployment
Mar 9 09:19:57.820: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 9 09:19:57.843: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 9 09:19:57.903: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 9 09:19:58.004: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment deployment-6460 /apis/apps/v1/namespaces/deployment-6460/deployments/webserver-deployment 7aaa05b7-bc27-458a-be56-84c8bfb3efea 267375 3 2020-03-09 09:19:49 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f0cc28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-09 09:19:56 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-09 09:19:57 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Mar 9 09:19:58.082: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6460 /apis/apps/v1/namespaces/deployment-6460/replicasets/webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 267426 3 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7aaa05b7-bc27-458a-be56-84c8bfb3efea 0xc000a175f7 0xc000a175f8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a176f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 9 09:19:58.082: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Mar 9 09:19:58.082: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6460 /apis/apps/v1/namespaces/deployment-6460/replicasets/webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 267425 3 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7aaa05b7-bc27-458a-be56-84c8bfb3efea 0xc000a174e7 0xc000a174e8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a17598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Mar 9 09:19:58.103: INFO: Pod "webserver-deployment-595b5b9587-2cb6d" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2cb6d webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-2cb6d 72d043d3-e1e5-40b8-9f58-42ee83d10048 267404 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c047 0xc00289c048}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-2v94v" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2v94v webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-2v94v 76153c66-e3a5-47ec-9e9d-711a9ea5d038 267262 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c200 0xc00289c201}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.32,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://912c6e030ebbc3834764f237408509f8a43375215461be8f5f8f2ab7ae8a9a79,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-4flc7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4flc7 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-4flc7 d6fe87d5-0b8a-41c8-b7f2-760b7c006eaf 267274 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c4a0 0xc00289c4a1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.23,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://98c3e3fb99f30a3e6ab290092709d2055bf22e5020b66aca34954d8baac4265e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-59x9c" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-59x9c webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-59x9c cddb708a-55db-4dba-bc84-97c477c1c6ca 267265 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c740 0xc00289c741}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.36,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c0812f58d022f61c1d912530640bd1270addc5c549bdecd299d39aa26f8f35bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-5jvmr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5jvmr webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-5jvmr b04bc41d-98e6-4d96-9d47-ac0b26b8008a 267392 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289c940 0xc00289c941}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-6kmrc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6kmrc webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-6kmrc d8984441-e119-4816-b622-911584b745cd 267259 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289cb90 0xc00289cb91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadli
neSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.33,StartTime:2020-03-09 09:19:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bc2f62d7a9f8f6c729947c3c49b6fa0cd9183bd6850fd9fbe92fbb799fc99ff7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.104: INFO: Pod "webserver-deployment-595b5b9587-8gfcx" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8gfcx webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-8gfcx 4f8b6336-4f16-47d6-a060-6cd733699b46 267417 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289cd70 0xc00289cd71}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadli
neSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-cz7vm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cz7vm webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-cz7vm 5bb3ae2b-9ca5-4040-aa80-c6de5bd57a50 267420 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289ce80 0xc00289ce81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadli
neSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-d4blq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4blq webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-d4blq 3dd14a47-cc73-4ce5-9c50-cd1677da3106 267382 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289cf90 0xc00289cf91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadli
neSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-f4twt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-f4twt webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-f4twt 00de9839-fdc5-49aa-8c24-c7a777b2b830 267419 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d0a0 0xc00289d0a1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadli
neSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-krrxk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-krrxk webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-krrxk bbcbb83c-e49c-4453-be1a-c251279360ff 267268 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d1b0 0xc00289d1b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadli
neSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.35,StartTime:2020-03-09 09:19:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://265f9816ae2f2e1c22913a1a5b7b59e9903c8ce2c597bd394fda022cf8dc4d09,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-l96m7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-l96m7 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-l96m7 25ed64f8-122f-4f21-b5a0-d825ef7a60e8 267248 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d320 0xc00289d321}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadli
neSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.22,StartTime:2020-03-09 09:19:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1224d121f21339681d0563132496845fd8feb81ca5b4cd6502e70ad7edc7957e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-qxd6w" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qxd6w webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-qxd6w 2ebebd8b-e80b-4eea-97bc-b7e073262b2c 267416 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d5c0 0xc00289d5c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.105: INFO: Pod "webserver-deployment-595b5b9587-rfzjv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rfzjv webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-rfzjv 4fc3c061-742b-4469-a878-44881f476636 267405 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d7b0 0xc00289d7b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-rmld8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rmld8 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-rmld8 5e6aac17-5ab2-461a-9149-9e4afac8c1e9 267406 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289d960 0xc00289d961}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-s4f9s" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-s4f9s webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-s4f9s d479e7e8-9bdb-44a5-9756-ea46b9fe4494 267418 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289db40 0xc00289db41}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-sr667" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sr667 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-sr667 7109f57c-d197-4791-afad-fd9eafc912ad 267440 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289dcf0 0xc00289dcf1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-09 09:19:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-w4rcm" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w4rcm webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-w4rcm 23f1f8bd-e0e0-4743-bd12-83014cb6c2bd 267277 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc00289df90 0xc00289df91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.26,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a474733f2dd7cfce693522ea294b2bfa4535109d74bfe35d46ec5ebbf337b67,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-x69n7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x69n7 webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-x69n7 1cc99b4a-7a6a-497c-820b-b998457268a5 267280 0 2020-03-09 09:19:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc000a1a190 0xc000a1a191}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.25,StartTime:2020-03-09 09:19:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-09 09:19:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://06f488455f731d7406d0c2b78554a34bf198e05a2917a3105159ab32c548ae9a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.106: INFO: Pod "webserver-deployment-595b5b9587-zxz9f" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zxz9f webserver-deployment-595b5b9587- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-595b5b9587-zxz9f fa48b456-f87e-4019-b9a5-058b93a91e65 267430 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 dce6a453-4eeb-43d9-af25-8925fc03c6df 0xc000a1a310 0xc000a1a311}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-09 09:19:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-46jxk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-46jxk webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-46jxk 7c3eb508-1845-4d88-a118-952d91c0f6df 267431 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1a460 0xc000a1a461}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-4psfk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4psfk webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-4psfk 8cd54c9b-8f64-40c9-b650-02df4768cb7b 267423 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1a580 0xc000a1a581}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-5fgfq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5fgfq webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-5fgfq a8f6ed64-cfb9-4049-b335-1d64b2c2ba39 267422 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1a6b0 0xc000a1a6b1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-7f2rv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7f2rv webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-7f2rv ff8afdd6-2861-4cdf-ac12-858edf350d7c 267421 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1a7d0 0xc000a1a7d1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-9tgqc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9tgqc webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-9tgqc a1d22c9d-6a81-44b8-9bd5-72ca4c95187e 267341 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1ab20 0xc000a1ab21}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-09 09:19:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-b7p6q" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7p6q webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-b7p6q e5d39845-3e38-49f5-b39a-5050388c70c3 267344 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1acb0 0xc000a1acb1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-09 09:19:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-hxpwk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hxpwk webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-hxpwk 1b8ff28c-3599-41dc-b144-53503e5337b7 267380 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1ae40 0xc000a1ae41}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-j6w69" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j6w69 webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-j6w69 5c607f29-9274-4520-bd2d-7f334e391826 267403 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1af70 0xc000a1af71}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.107: INFO: Pod "webserver-deployment-c7997dcc8-sgfzp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sgfzp webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-sgfzp dd1cbde6-52b8-4310-9641-928dfc6e6556 267424 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b0c0 0xc000a1b0c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.108: INFO: Pod "webserver-deployment-c7997dcc8-tv2kh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tv2kh webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-tv2kh ccb3dafb-0b37-4f16-bcb6-b817b9292dac 267384 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b1e0 0xc000a1b1e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.27,StartTime:2020-03-09 09:19:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.108: INFO: Pod "webserver-deployment-c7997dcc8-xvssz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xvssz webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-xvssz 2e56321b-24b4-4831-8f69-5c11813512f6 267342 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b380 0xc000a1b381}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-09 09:19:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.108: INFO: Pod "webserver-deployment-c7997dcc8-xx6kg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xx6kg webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-xx6kg b008796e-672d-4276-95c4-8dee5da70ac5 267390 0 2020-03-09 09:19:57 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b4f0 0xc000a1b4f1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 9 09:19:58.108: INFO: Pod "webserver-deployment-c7997dcc8-zj2z8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zj2z8 webserver-deployment-c7997dcc8- deployment-6460 /api/v1/namespaces/deployment-6460/pods/webserver-deployment-c7997dcc8-zj2z8 e9936b39-87f6-4002-ae6b-bb0d9bf06276 267346 0 2020-03-09 09:19:55 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 41f7c774-64cc-4db5-8484-e6fc32096452 0xc000a1b630 0xc000a1b631}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6d5bh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6d5bh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6d5bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-09 09:19:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-09 09:19:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:19:58.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6460" for this suite.
• [SLOW TEST:8.737 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":113,"skipped":1970,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:19:58.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 9 09:19:58.476: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 9 09:19:58.593: INFO: Waiting for terminating namespaces to be deleted...
Mar 9 09:19:58.599: INFO:
Logging pods the kubelet thinks is on node jerma-worker before test
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-5jvmr from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-f4twt from deployment-6460 started at (0 container statuses recorded)
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-4flc7 from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: true, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-xvssz from deployment-6460 started at 2020-03-09 09:19:55 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-hxpwk from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-rfzjv from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-qxd6w from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-5fgfq from deployment-6460 started at (0 container statuses recorded)
Mar 9 09:19:58.850: INFO: kube-proxy-dvgp7 from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container kube-proxy ready: true, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-w4rcm from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: true, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-x69n7 from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: true, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-tv2kh from deployment-6460 started at 2020-03-09 09:19:55 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-j6w69 from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:58.850: INFO: kindnet-gxwrl from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container kindnet-cni ready: true, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-l96m7 from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: true, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-zxz9f from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:58.850: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:58.850: INFO: webserver-deployment-595b5b9587-s4f9s from deployment-6460 started at (0 container statuses recorded)
Mar 9 09:19:58.850: INFO: webserver-deployment-c7997dcc8-sgfzp from deployment-6460 started at (0 container statuses recorded)
Mar 9 09:19:58.850: INFO:
Logging pods the kubelet thinks is on node jerma-worker2 before test
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-cz7vm from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: kindnet-x9bds from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container kindnet-cni ready: true, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-2v94v from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: true, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-xx6kg from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-59x9c from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: true, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-b7p6q from deployment-6460 started at 2020-03-09 09:19:55 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-2cb6d from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-8gfcx from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-7f2rv from deployment-6460 started at (0 container statuses recorded)
Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-9tgqc from deployment-6460 started at 2020-03-09 09:19:55 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-d4blq from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-46jxk from deployment-6460 started at (0 container statuses recorded)
Mar 9 09:19:59.148: INFO: kube-proxy-xqsww from kube-system started at 2020-03-08 14:48:16 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container kube-proxy ready: true, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-6kmrc from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: true, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-rmld8 from deployment-6460 started at 2020-03-09 09:19:58 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-4psfk from deployment-6460 started at (0 container statuses recorded)
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-krrxk from deployment-6460 started at 2020-03-09 09:19:49 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: true, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-c7997dcc8-zj2z8 from deployment-6460 started at 2020-03-09 09:19:56 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
Mar 9 09:19:59.148: INFO: webserver-deployment-595b5b9587-sr667 from deployment-6460 started at 2020-03-09 09:19:57 +0000 UTC (1 container statuses recorded)
Mar 9 09:19:59.148: INFO: Container httpd ready: false, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-2cb6d requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-2v94v requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-4flc7 requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-59x9c requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-5jvmr requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-6kmrc requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-8gfcx requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-cz7vm requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-d4blq requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-f4twt requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-krrxk requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-l96m7 requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-qxd6w requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-rfzjv requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-rmld8 requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-s4f9s requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-sr667 requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-w4rcm requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-x69n7 requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-595b5b9587-zxz9f requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.293: INFO: Pod webserver-deployment-c7997dcc8-46jxk requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-4psfk requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-5fgfq requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-7f2rv requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-9tgqc requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-b7p6q requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-hxpwk requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-j6w69 requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-sgfzp requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-tv2kh requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-xvssz requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-xx6kg requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.294: INFO: Pod webserver-deployment-c7997dcc8-zj2z8 requesting resource cpu=0m on Node jerma-worker2
Mar 9 09:19:59.294: INFO: Pod kindnet-gxwrl requesting resource cpu=100m on Node jerma-worker
Mar 9 09:19:59.294: INFO: Pod kindnet-x9bds requesting resource cpu=100m on Node jerma-worker2
Mar 9 09:19:59.294: INFO: Pod kube-proxy-dvgp7 requesting resource cpu=0m on Node jerma-worker
Mar 9 09:19:59.294: INFO: Pod kube-proxy-xqsww requesting resource cpu=0m on Node jerma-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Mar 9 09:19:59.294: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Mar 9 09:19:59.298: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event:
Type = [Normal], Name = [filler-pod-20f200ef-5ef9-4028-b994-19989823391a.15fa983b47df5725], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4876/filler-pod-20f200ef-5ef9-4028-b994-19989823391a to jerma-worker2]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-20f200ef-5ef9-4028-b994-19989823391a.15fa983bde012a4e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-20f200ef-5ef9-4028-b994-19989823391a.15fa983bfbf1bc1f], Reason = [Created], Message = [Created container filler-pod-20f200ef-5ef9-4028-b994-19989823391a]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-20f200ef-5ef9-4028-b994-19989823391a.15fa983c0a00eee5], Reason = [Started], Message = [Started container filler-pod-20f200ef-5ef9-4028-b994-19989823391a]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791.15fa983b458fbac3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4876/filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791 to jerma-worker]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791.15fa983be8f5ef53], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791.15fa983c03009543], Reason = [Created], Message = [Created container filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791]
STEP: Considering event:
Type = [Normal], Name = [filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791.15fa983c10ce6b12], Reason = [Started], Message = [Started container filler-pod-bdccb208-18fe-4580-a389-e3d1f9f7a791]
STEP: Considering event:
Type = [Warning], Name = [additional-pod.15fa983cb1a5f51a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event:
Type = [Warning], Name = [additional-pod.15fa983cb66d6660], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:20:06.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4876" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:8.454 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":114,"skipped":1974,"failed":0}
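The FailedScheduling events above come from a "filler" pod on each worker that requests nearly all allocatable CPU, so one more pod cannot fit. A minimal sketch of such a filler pod manifest, reconstructed from values in the log (the pod name here is hypothetical; the image and the 11130m request are taken from the events above, but the test's actual spec is not shown in this log):

```yaml
# Illustrative reconstruction, not the test's real manifest.
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example          # hypothetical name
  namespace: sched-pred-4876
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1     # image seen in the "Pulled" events
    resources:
      requests:
        cpu: "11130m"               # value logged for jerma-worker
      limits:
        cpu: "11130m"
```

With both workers filled like this, the scheduler reports "2 Insufficient cpu" for any additional pod, which is exactly the Warning event the test waits for.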
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:20:06.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:20:06.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7677" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":115,"skipped":2008,"failed":0}
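The test above walks the API discovery chain: `/apis`, then `/apis/apiextensions.k8s.io`, then `/apis/apiextensions.k8s.io/v1`. For orientation, this is the abridged shape of the `/apis` response (an `APIGroupList`) that the first step inspects; only the group the test looks for is shown, and this is a sketch of the standard API shape rather than output captured from this run:

```yaml
# Abridged /apis discovery document (APIGroupList), shown as YAML.
kind: APIGroupList
apiVersion: v1
groups:
- name: apiextensions.k8s.io
  versions:
  - groupVersion: apiextensions.k8s.io/v1
    version: v1
  preferredVersion:
    groupVersion: apiextensions.k8s.io/v1
    version: v1
```

The final step then confirms that the `/apis/apiextensions.k8s.io/v1` resource list contains a `customresourcedefinitions` entry.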
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged
should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:20:06.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:20:07.062: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1" in namespace "security-context-test-3349" to be "success or failure"
Mar 9 09:20:07.147: INFO: Pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1": Phase="Pending", Reason="", readiness=false. Elapsed: 84.277682ms
Mar 9 09:20:09.161: INFO: Pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098540944s
Mar 9 09:20:11.170: INFO: Pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10795929s
Mar 9 09:20:11.170: INFO: Pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1" satisfied condition "success or failure"
Mar 9 09:20:11.177: INFO: Got logs for pod "busybox-privileged-false-9661c0b0-02bf-47a5-afc1-d10f04ed4af1": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:20:11.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3349" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":2011,"failed":0}
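The "RTNETLINK answers: Operation not permitted" log line is the expected outcome: an unprivileged container lacks the capability to modify network interfaces. A minimal sketch of a pod that reproduces this, assuming a busybox image and an `ip` command similar to what the test runs (the exact command is not shown in this log):

```yaml
# Hedged reconstruction: pod name and command are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "ip link add dummy0 type dummy"]
    securityContext:
      privileged: false   # without privilege, RTNETLINK operations are denied
```

With `privileged: true` instead, the same command would be permitted, which is what distinguishes the two sides of this test.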
SSSSSSSSSSSS
------------------------------
[sig-network] Services
should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:20:11.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9753
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-9753
STEP: creating replication controller externalsvc in namespace services-9753
I0309 09:20:11.409673 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9753, replica count: 2
I0309 09:20:14.460061 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Mar 9 09:20:14.509: INFO: Creating new exec pod
Mar 9 09:20:16.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9753 execpodxb64w -- /bin/sh -x -c nslookup clusterip-service'
Mar 9 09:20:16.712: INFO: stderr: "I0309 09:20:16.632058 1402 log.go:172] (0xc000ab5c30) (0xc00095caa0) Create stream\nI0309 09:20:16.632091 1402 log.go:172] (0xc000ab5c30) (0xc00095caa0) Stream added, broadcasting: 1\nI0309 09:20:16.636589 1402 log.go:172] (0xc000ab5c30) Reply frame received for 1\nI0309 09:20:16.636642 1402 log.go:172] (0xc000ab5c30) (0xc0006ba780) Create stream\nI0309 09:20:16.636660 1402 log.go:172] (0xc000ab5c30) (0xc0006ba780) Stream added, broadcasting: 3\nI0309 09:20:16.637454 1402 log.go:172] (0xc000ab5c30) Reply frame received for 3\nI0309 09:20:16.637479 1402 log.go:172] (0xc000ab5c30) (0xc000529540) Create stream\nI0309 09:20:16.637490 1402 log.go:172] (0xc000ab5c30) (0xc000529540) Stream added, broadcasting: 5\nI0309 09:20:16.638261 1402 log.go:172] (0xc000ab5c30) Reply frame received for 5\nI0309 09:20:16.700576 1402 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0309 09:20:16.700599 1402 log.go:172] (0xc000529540) (5) Data frame handling\nI0309 09:20:16.700612 1402 log.go:172] (0xc000529540) (5) Data frame sent\n+ nslookup clusterip-service\nI0309 09:20:16.705186 1402 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0309 09:20:16.705201 1402 log.go:172] (0xc0006ba780) (3) Data frame handling\nI0309 09:20:16.705213 1402 log.go:172] (0xc0006ba780) (3) Data frame sent\nI0309 09:20:16.706456 1402 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0309 09:20:16.706483 1402 log.go:172] (0xc0006ba780) (3) Data frame handling\nI0309 09:20:16.706498 1402 log.go:172] (0xc0006ba780) (3) Data frame sent\nI0309 09:20:16.706570 1402 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0309 09:20:16.706596 1402 log.go:172] (0xc000529540) (5) Data frame handling\nI0309 09:20:16.706712 1402 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0309 09:20:16.706730 1402 log.go:172] (0xc0006ba780) (3) Data frame handling\nI0309 09:20:16.708714 1402 log.go:172] (0xc000ab5c30) Data frame received for 1\nI0309 09:20:16.708737 1402 
log.go:172] (0xc00095caa0) (1) Data frame handling\nI0309 09:20:16.708745 1402 log.go:172] (0xc00095caa0) (1) Data frame sent\nI0309 09:20:16.708755 1402 log.go:172] (0xc000ab5c30) (0xc00095caa0) Stream removed, broadcasting: 1\nI0309 09:20:16.708772 1402 log.go:172] (0xc000ab5c30) Go away received\nI0309 09:20:16.709010 1402 log.go:172] (0xc000ab5c30) (0xc00095caa0) Stream removed, broadcasting: 1\nI0309 09:20:16.709029 1402 log.go:172] (0xc000ab5c30) (0xc0006ba780) Stream removed, broadcasting: 3\nI0309 09:20:16.709038 1402 log.go:172] (0xc000ab5c30) (0xc000529540) Stream removed, broadcasting: 5\n"
Mar 9 09:20:16.712: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9753.svc.cluster.local\tcanonical name = externalsvc.services-9753.svc.cluster.local.\nName:\texternalsvc.services-9753.svc.cluster.local\nAddress: 10.109.106.175\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9753, will wait for the garbage collector to delete the pods
Mar 9 09:20:16.803: INFO: Deleting ReplicationController externalsvc took: 13.678505ms
Mar 9 09:20:17.103: INFO: Terminating ReplicationController externalsvc pods took: 300.26883ms
Mar 9 09:20:26.136: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:20:26.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9753" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:14.997 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":117,"skipped":2023,"failed":0}
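The nslookup output above shows the mechanism: after the type change, cluster DNS answers for `clusterip-service` with a CNAME to the `externalName` target instead of a ClusterIP A record. A sketch of what the updated Service roughly looks like, using the names from this run (the full spec is not printed in the log):

```yaml
# Sketch of the Service after the ClusterIP -> ExternalName switch.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-9753
spec:
  type: ExternalName
  # DNS target confirmed by the canonical-name line in the nslookup output:
  externalName: externalsvc.services-9753.svc.cluster.local
```

An ExternalName Service allocates no cluster IP and programs no proxy rules; resolution is handled entirely by DNS.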
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:20:26.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:20:26.258: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:20:27.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6819" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":118,"skipped":2041,"failed":0}
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:20:27.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-9297
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[]
Mar 9 09:20:27.576: INFO: Get endpoints failed (3.019744ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 9 09:20:28.579: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[] (1.006179649s elapsed)
STEP: Creating pod pod1 in namespace services-9297
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[pod1:[80]]
Mar 9 09:20:30.658: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[pod1:[80]] (2.073513625s elapsed)
STEP: Creating pod pod2 in namespace services-9297
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 9 09:20:32.763: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[pod1:[80] pod2:[80]] (2.101267451s elapsed)
STEP: Deleting pod pod1 in namespace services-9297
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[pod2:[80]]
Mar 9 09:20:32.831: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[pod2:[80]] (53.409585ms elapsed)
STEP: Deleting pod pod2 in namespace services-9297
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9297 to expose endpoints map[]
Mar 9 09:20:33.840: INFO: successfully validated that service endpoint-test2 in namespace services-9297 exposes endpoints map[] (1.005834862s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 9 09:20:33.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9297" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:6.400 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":119,"skipped":2041,"failed":0}
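The endpoints maps the test waits on (`map[]`, `map[pod1:[80]]`, `map[pod1:[80] pod2:[80]]`, …) track which label-matching, ready pods currently back the Service on port 80. A minimal sketch of such a Service, with an assumed selector label since the log does not print the spec:

```yaml
# Illustrative sketch; the selector label is an assumption.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-9297
spec:
  selector:
    name: endpoint-test2   # hypothetical label carried by pod1 and pod2
  ports:
  - port: 80
    targetPort: 80
```

Creating a matching pod adds its IP:80 to the Endpoints object, and deleting it removes the entry, which is the sequence of transitions logged above.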
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 9 09:20:33.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 9 09:20:33.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 9 09:20:36.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2233 create -f -'
Mar 9 09:20:38.778: INFO: stderr: ""
Mar 9 09:20:38.778: INFO: stdout: "e2e-test-crd-publish-openapi-7890-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Mar 9 09:20:38.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2233 delete e2e-test-crd-publish-openapi-7890-crds test-cr'
Mar 9 09:20:38.894: INFO: stderr: ""
Mar 9 09:20:38.894: INFO: stdout: "e2e-test-crd-publish-openapi-7890-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Mar 9 09:20:38.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2233 apply -f -'
Mar 9 09:20:39.209: INFO: stderr: ""
Mar 9 09:20:39.209: INFO: stdout: "e2e-test-crd-publish-openapi-7890-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Mar 9 09:20:39.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2233 delete e2e-test-crd-publish-openapi-7890-crds test-cr'
Mar 9 09:20:39.330: INFO: stderr: ""
Mar 9 09:20:39.331: INFO: stdout: "e2e-test-crd-publish-openapi-7890-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar 9 09:20:39.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7890-crds'
Mar 9 09:20:39.561: INFO: stderr: ""
Mar 9 09:20:39.561: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7890-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t