I0526 23:38:27.214493 8 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0526 23:38:27.214683 8 e2e.go:129] Starting e2e run "bec3cffa-fb7e-4fa0-a839-6c7575821954" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590536306 - Will randomize all specs
Will run 288 of 5095 specs

May 26 23:38:27.267: INFO: >>> kubeConfig: /root/.kube/config
May 26 23:38:27.269: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 26 23:38:27.290: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 26 23:38:27.325: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 26 23:38:27.325: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 26 23:38:27.325: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 26 23:38:27.335: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 26 23:38:27.335: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 26 23:38:27.335: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 26 23:38:27.336: INFO: kube-apiserver version: v1.18.2
May 26 23:38:27.336: INFO: >>> kubeConfig: /root/.kube/config
May 26 23:38:27.343: INFO: Cluster IP family: ipv4
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:38:27.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
May 26 23:38:27.400: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 26 23:38:28.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-75dd644756\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
May 26 23:38:30.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 26 23:38:32.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133108, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 26 23:38:35.516: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:38:48.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6177" for this suite.
STEP: Destroying namespace "webhook-6177-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:20.921 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
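The four timeout scenarios above hinge on two fields of the webhook registration: timeoutSeconds and failurePolicy. Below is a minimal client-go sketch of such a registration; the configuration name, service path, port, and rule are illustrative stand-ins, not the e2e framework's actual values.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	timeout := int32(1)                           // shorter than the webhook's 5s sleep
	failurePolicy := admissionregistrationv1.Fail // 1s timeout + Fail => the request is rejected
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/always-allow-delay-5s" // hypothetical slow endpoint
	port := int32(8443)

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "slow-webhook.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-6177", // the test's namespace; any namespace works
					Name:      "e2e-test-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: nil, // CA bundle for the webhook's serving cert goes here
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			TimeoutSeconds:          &timeout, // omit to get the v1 default of 10s
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}

	_, err = client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}

With failurePolicy set to Ignore instead of Fail, the same 1s timeout produces no error, which is the second scenario in the log; leaving timeoutSeconds unset exercises the 10s v1 default, the fourth scenario.
------------------------------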
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:38:48.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 26 23:38:48.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45" in namespace "projected-5795" to be "Succeeded or Failed"
May 26 23:38:48.425: INFO: Pod "downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45": Phase="Pending", Reason="", readiness=false. Elapsed: 7.855396ms
May 26 23:38:50.451: INFO: Pod "downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033697578s
May 26 23:38:52.455: INFO: Pod "downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45": Phase="Running", Reason="", readiness=true. Elapsed: 4.038088584s
May 26 23:38:54.460: INFO: Pod "downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043059186s
STEP: Saw pod success
May 26 23:38:54.461: INFO: Pod "downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45" satisfied condition "Succeeded or Failed"
May 26 23:38:54.464: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45 container client-container:
STEP: delete the pod
May 26 23:38:54.570: INFO: Waiting for pod downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45 to disappear
May 26 23:38:54.575: INFO: Pod downwardapi-volume-4df3f05e-154e-445f-be95-bc7c1b86aa45 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:38:54.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5795" for this suite.
• [SLOW TEST:6.317 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":31,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
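The pod under test reads its own memory limit from a file projected by the downward API. A minimal sketch of such a pod follows, assuming a busybox image, a hypothetical mount path, and a 64Mi limit; the e2e framework constructs its pod differently.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// A projected volume with a downwardAPI source, as in the
					// "Projected downwardAPI" test family.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The container prints the limit in bytes and exits, so the pod lands in Succeeded, the phase the test polls for above.
------------------------------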
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:38:54.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 26 23:39:01.168: INFO: Successfully updated pod "adopt-release-kjdzz"
STEP: Checking that the Job readopts the Pod
May 26 23:39:01.168: INFO: Waiting up to 15m0s for pod "adopt-release-kjdzz" in namespace "job-3998" to be "adopted"
May 26 23:39:01.192: INFO: Pod "adopt-release-kjdzz": Phase="Running", Reason="", readiness=true. Elapsed: 23.530318ms
May 26 23:39:03.205: INFO: Pod "adopt-release-kjdzz": Phase="Running", Reason="", readiness=true. Elapsed: 2.037012979s
May 26 23:39:03.205: INFO: Pod "adopt-release-kjdzz" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 26 23:39:03.714: INFO: Successfully updated pod "adopt-release-kjdzz"
STEP: Checking that the Job releases the Pod
May 26 23:39:03.714: INFO: Waiting up to 15m0s for pod "adopt-release-kjdzz" in namespace "job-3998" to be "released"
May 26 23:39:03.727: INFO: Pod "adopt-release-kjdzz": Phase="Running", Reason="", readiness=true. Elapsed: 12.760112ms
May 26 23:39:05.731: INFO: Pod "adopt-release-kjdzz": Phase="Running", Reason="", readiness=true. Elapsed: 2.016983623s
May 26 23:39:05.731: INFO: Pod "adopt-release-kjdzz" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:39:05.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3998" for this suite.
• [SLOW TEST:11.173 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":3,"skipped":112,"failed":0}
S
------------------------------
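"Orphaning" and "releasing" here are ownerReference and label manipulations: the test strips the pod's ownerReferences and waits for the Job controller to re-adopt it through the label selector, then strips the selected label and waits for the controller to release it. A sketch of the two patches with client-go; the pod name and namespace come from the log above, but the label key "job" is an assumption.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pods := client.CoreV1().Pods("job-3998")

	// Orphan the pod: drop its ownerReferences. Because its labels still match
	// the Job's selector, the Job controller should re-adopt it shortly after.
	orphan := []byte(`{"metadata":{"ownerReferences":null}}`)
	if _, err := pods.Patch(context.TODO(), "adopt-release-kjdzz", types.MergePatchType, orphan, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Release the pod: remove the label the Job selects on ("job" is an
	// assumed key). The controller should then strip its controller
	// ownerReference, releasing the pod.
	release := []byte(`{"metadata":{"labels":{"job":null}}}`)
	if _, err := pods.Patch(context.TODO(), "adopt-release-kjdzz", types.MergePatchType, release, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}

JSON merge patch semantics make `null` delete the field, which is why both steps are single small patches rather than read-modify-write updates.
------------------------------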
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:39:05.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 26 23:39:13.746: INFO: 0 pods remaining
May 26 23:39:13.746: INFO: 0 pods has nil DeletionTimestamp
May 26 23:39:13.746: INFO:
May 26 23:39:14.969: INFO: 0 pods remaining
May 26 23:39:14.969: INFO: 0 pods has nil DeletionTimestamp
May 26 23:39:14.969: INFO:
STEP: Gathering metrics
W0526 23:39:16.710193 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 26 23:39:16.710: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:39:16.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3718" for this suite.
• [SLOW TEST:11.001 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":4,"skipped":113,"failed":0}
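The deleteOptions in question is PropagationPolicy: Foreground, which keeps the replication controller around (held by the foregroundDeletion finalizer) until the garbage collector has deleted all of its pods. A minimal sketch; the RC name is an assumption, since the log does not show it.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Foreground propagation: the RC gets a deletionTimestamp and a
	// foregroundDeletion finalizer, and is only removed from the API after the
	// garbage collector has deleted all of its dependent pods.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("gc-3718").Delete(
		context.TODO(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}

The "0 pods remaining / 0 pods has nil DeletionTimestamp" lines above are the test polling exactly this invariant while the RC is still pending deletion.
------------------------------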
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:39:16.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-c48ffb36-a7d5-4057-b259-728d3b0176c2
STEP: Creating a pod to test consume configMaps
May 26 23:39:19.171: INFO: Waiting up to 5m0s for pod "pod-configmaps-e2c9c5ee-0a69-49c5-a3d0-1bf73d6a17fa" in namespace "configmap-3775" to be "Succeeded or Failed"
May 26 23:39:19.209: INFO: Pod "pod-configmaps-e2c9c5ee-0a69-49c5-a3d0-1bf73d6a17fa": Phase="Pending", Reason="", readiness=false. Elapsed: 38.288158ms
May 26 23:39:21.230: INFO: Pod "pod-configmaps-e2c9c5ee-0a69-49c5-a3d0-1bf73d6a17fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059510787s
May 26 23:39:23.234: INFO: Pod "pod-configmaps-e2c9c5ee-0a69-49c5-a3d0-1bf73d6a17fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063043306s
STEP: Saw pod success
May 26 23:39:23.234: INFO: Pod "pod-configmaps-e2c9c5ee-0a69-49c5-a3d0-1bf73d6a17fa" satisfied condition "Succeeded or Failed"
May 26 23:39:23.237: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e2c9c5ee-0a69-49c5-a3d0-1bf73d6a17fa container configmap-volume-test:
STEP: delete the pod
May 26 23:39:23.284: INFO: Waiting for pod pod-configmaps-e2c9c5ee-0a69-49c5-a3d0-1bf73d6a17fa to disappear
May 26 23:39:23.492: INFO: Pod pod-configmaps-e2c9c5ee-0a69-49c5-a3d0-1bf73d6a17fa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:39:23.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3775" for this suite.
• [SLOW TEST:6.742 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":5,"skipped":113,"failed":0}
SSSSS
------------------------------
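defaultMode sets the permission bits applied to every file projected from the ConfigMap into the volume. A minimal sketch of such a pod, assuming a busybox image, a hypothetical ConfigMap name, and 0400 as the mode under test.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	mode := int32(0400) // applied to every key projected from the ConfigMap
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &mode, // files appear as -r-------- inside the container
					},
				},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The [LinuxOnly] tag on the test exists because these mode bits are only meaningful on Linux nodes.
------------------------------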
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:39:23.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 26 23:39:23.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8318'
May 26 23:39:28.056: INFO: stderr: ""
May 26 23:39:28.056: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May 26 23:39:28.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8318'
May 26 23:39:29.904: INFO: stderr: ""
May 26 23:39:29.904: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 26 23:39:30.909: INFO: Selector matched 1 pods for map[app:agnhost]
May 26 23:39:30.909: INFO: Found 0 / 1
May 26 23:39:31.919: INFO: Selector matched 1 pods for map[app:agnhost]
May 26 23:39:31.919: INFO: Found 0 / 1
May 26 23:39:32.910: INFO: Selector matched 1 pods for map[app:agnhost]
May 26 23:39:32.910: INFO: Found 1 / 1
May 26 23:39:32.910: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 26 23:39:32.914: INFO: Selector matched 1 pods for map[app:agnhost]
May 26 23:39:32.914: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 26 23:39:32.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-47dsb --namespace=kubectl-8318'
May 26 23:39:33.078: INFO: stderr: ""
May 26 23:39:33.078: INFO: stdout: "Name: agnhost-master-47dsb\nNamespace: kubectl-8318\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Tue, 26 May 2020 23:39:28 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.49\nIPs:\n IP: 10.244.2.49\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://ad78d2c776239b518734cab34564b5beac0d84dd1b1c6dfb2063336a4f39a69b\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 26 May 2020 23:39:31 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-7qs6x (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-7qs6x:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-7qs6x\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-8318/agnhost-master-47dsb to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 2s kubelet, latest-worker2 Started container agnhost-master\n"
May 26 23:39:33.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8318'
May 26 23:39:33.203: INFO: stderr: ""
May 26 23:39:33.203: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8318\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-47dsb\n"
May 26 23:39:33.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8318'
May 26 23:39:33.321: INFO: stderr: ""
May 26 23:39:33.321: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8318\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.108.83.53\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.49:6379\nSession Affinity: None\nEvents: <none>\n"
May 26 23:39:33.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane'
May 26 23:39:33.469: INFO: stderr: ""
May 26 23:39:33.470: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: <unset>\n RenewTime: Tue, 26 May 2020 23:39:30 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 26 May 2020 23:36:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 26 May 2020 23:36:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 26 May 2020 23:36:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 26 May 2020 23:36:25 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 27d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 27d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 27d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 27d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 27d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 27d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 27d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n"
May 26 23:39:33.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-8318'
May 26 23:39:33.570: INFO: stderr: ""
May 26 23:39:33.570: INFO: stdout: "Name: kubectl-8318\nLabels: e2e-framework=kubectl\n e2e-run=bec3cffa-fb7e-4fa0-a839-6c7575821954\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:39:33.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8318" for this suite.
• [SLOW TEST:10.077 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":6,"skipped":118,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:39:33.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 26 23:39:34.103: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 26 23:39:36.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133174, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133174, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133174, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133174, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 26 23:39:39.162: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:39:39.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4152" for this suite.
STEP: Destroying namespace "webhook-4152-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.286 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":7,"skipped":122,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
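The invariant checked above: even with validating and mutating webhooks registered against *WebhookConfiguration objects themselves, the API server must still allow those objects to be created and deleted. A minimal sketch of the dummy create-then-delete round trip; the object name is a stand-in.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	api := client.AdmissionregistrationV1().ValidatingWebhookConfigurations()

	// Create a dummy configuration with no webhooks, then delete it. The API
	// server exempts webhook configuration objects from interception, so
	// registered webhooks can neither mutate it on create nor block the delete.
	dummy := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "dummy-validating-webhook-configuration"},
	}
	if _, err := api.Create(context.TODO(), dummy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if err := api.Delete(context.TODO(), dummy.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}

The test repeats the same round trip for a MutatingWebhookConfiguration, which is the second dummy-object pair of STEP lines above.
------------------------------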
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:39:39.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 26 23:39:40.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1636
I0526 23:39:40.353367 8 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1636, replica count: 1
I0526 23:39:41.403768 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 23:39:42.403993 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 23:39:43.404265 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 23:39:44.404499 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 23:39:45.404720 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 26 23:39:45.690: INFO: Created: latency-svc-vb8sl
May 26 23:39:45.719: INFO: Got endpoints: latency-svc-vb8sl [214.588949ms]
May 26 23:39:45.939: INFO: Created: latency-svc-dmbzm
May 26 23:39:46.061: INFO: Got endpoints: latency-svc-dmbzm [342.187957ms]
May 26 23:39:46.097: INFO: Created: latency-svc-n44j7
May 26 23:39:46.126: INFO: Got endpoints: latency-svc-n44j7 [406.494238ms]
May 26 23:39:46.199: INFO: Created: latency-svc-nnvm8
May 26 23:39:46.240: INFO: Got endpoints: latency-svc-nnvm8 [520.627377ms]
May 26 23:39:46.278: INFO: Created: latency-svc-qzjhr
May 26 23:39:46.331: INFO: Got endpoints: latency-svc-qzjhr [610.76447ms]
May 26 23:39:46.389: INFO: Created: latency-svc-6dpn4
May 26 23:39:46.409: INFO: Got endpoints: latency-svc-6dpn4 [689.88582ms]
May 26 23:39:46.488: INFO: Created: latency-svc-6zd5p
May 26 23:39:46.499: INFO: Got endpoints: latency-svc-6zd5p [779.36384ms]
May 26 23:39:46.571: INFO: Created: latency-svc-r8hhd
May 26 23:39:46.618: INFO: Got endpoints: latency-svc-r8hhd [898.34725ms]
May 26 23:39:46.652: INFO: Created: latency-svc-2pxzz
May 26 23:39:46.667: INFO: Got endpoints: latency-svc-2pxzz [947.189532ms]
May 26 23:39:46.768: INFO: Created: latency-svc-4wgff
May 26 23:39:46.773: INFO: Got endpoints: latency-svc-4wgff [1.053362651s]
May 26 23:39:46.802: INFO: Created: latency-svc-qw25j
May 26 23:39:46.811: INFO: Got endpoints: latency-svc-qw25j [1.091455599s]
May 26 23:39:46.912: INFO: Created: latency-svc-m5h5r
May 26 23:39:46.955: INFO: Created: latency-svc-bnhnm
May 26 23:39:46.955: INFO: Got endpoints: latency-svc-m5h5r [1.236204154s]
May 26 23:39:46.975: INFO: Got endpoints: latency-svc-bnhnm [1.25551785s]
May 26 23:39:47.084: INFO: Created: latency-svc-58z9t
May 26 23:39:47.103: INFO: Got endpoints: latency-svc-58z9t [1.38317818s]
May 26 23:39:47.144: INFO: Created: latency-svc-hxt4p
May 26 23:39:47.157: INFO: Got endpoints: latency-svc-hxt4p [1.437533295s]
May 26 23:39:47.210: INFO: Created: latency-svc-bsrc9
May 26 23:39:47.216: INFO: Got endpoints: latency-svc-bsrc9 [1.496680153s]
May 26 23:39:47.295: INFO: Created: latency-svc-csfnp
May 26 23:39:47.306: INFO: Got endpoints: latency-svc-csfnp [1.244874802s]
May 26 23:39:47.393: INFO: Created: latency-svc-b4vtw
May 26 23:39:47.430: INFO: Got endpoints: latency-svc-b4vtw [1.303839442s]
May 26 23:39:47.522: INFO: Created: latency-svc-znj85
May 26 23:39:47.528: INFO: Got endpoints: latency-svc-znj85 [1.288039327s]
May 26 23:39:47.579: INFO: Created: latency-svc-9rbs2
May 26 23:39:47.609: INFO: Got endpoints: latency-svc-9rbs2 [1.278500847s]
May 26 23:39:47.688: INFO: Created: latency-svc-kddsg
May 26 23:39:47.706: INFO: Got endpoints: latency-svc-kddsg [1.296284565s]
May 26 23:39:47.740: INFO: Created: latency-svc-4vp6k
May 26 23:39:47.758: INFO: Got endpoints: latency-svc-4vp6k [1.259408261s]
May 26 23:39:47.846: INFO: Created: latency-svc-4nn46
May 26 23:39:47.856: INFO: Got endpoints: latency-svc-4nn46 [1.238498646s]
May 26 23:39:47.879: INFO: Created: latency-svc-lnfpv
May 26 23:39:47.897: INFO: Got endpoints: latency-svc-lnfpv [1.229774784s]
May 26 23:39:47.989: INFO: Created: latency-svc-dljrs
May 26 23:39:48.002: INFO: Got endpoints: latency-svc-dljrs [1.228879029s]
May 26 23:39:48.038: INFO: Created: latency-svc-n4gg4
May 26 23:39:48.047: INFO: Got endpoints: latency-svc-n4gg4 [1.236259666s]
May 26 23:39:48.081: INFO: Created: latency-svc-62knn
May 26 23:39:48.121: INFO: Got endpoints: latency-svc-62knn [1.165346806s]
May 26 23:39:48.143: INFO: Created: latency-svc-wjmwb
May 26 23:39:48.161: INFO: Got endpoints: latency-svc-wjmwb [1.185858593s]
May 26 23:39:48.253: INFO: Created: latency-svc-jkkck
May 26 23:39:48.266: INFO: Got endpoints: latency-svc-jkkck [1.16328776s]
May 26 23:39:48.303: INFO: Created: latency-svc-w8wgb
May 26 23:39:48.330: INFO: Got endpoints: latency-svc-w8wgb [1.173343999s]
May 26 23:39:48.396: INFO: Created: latency-svc-8vjtf
May 26 23:39:48.402: INFO: Got endpoints: latency-svc-8vjtf [1.185275518s]
May 26 23:39:48.443: INFO: Created: latency-svc-mh2vv
May 26 23:39:48.474: INFO: Got endpoints: latency-svc-mh2vv [1.167743223s]
May 26 23:39:48.559: INFO: Created: latency-svc-lbjdx
May 26 23:39:48.562: INFO: Got endpoints: latency-svc-lbjdx [1.131614233s]
May 26 23:39:48.617: INFO: Created: latency-svc-wcqm8
May 26 23:39:48.641: INFO: Got endpoints: latency-svc-wcqm8 [1.112510872s]
May 26 23:39:48.710: INFO: Created: latency-svc-fw7b6
May 26 23:39:48.740: INFO: Got endpoints: latency-svc-fw7b6 [1.130390053s]
May 26 23:39:48.764: INFO: Created: latency-svc-p8256
May 26 23:39:48.797: INFO: Got endpoints: latency-svc-p8256 [1.091006863s]
May 26 23:39:48.930: INFO: Created: latency-svc-rnj8l
May 26 23:39:48.940: INFO: Got endpoints: latency-svc-rnj8l [1.181574656s]
May 26 23:39:49.025: INFO: Created: latency-svc-ggl6q
May 26 23:39:49.139: INFO: Got endpoints: latency-svc-ggl6q [1.283007969s]
May 26 23:39:49.184: INFO: Created: latency-svc-npvn5
May 26 23:39:49.217: INFO: Got endpoints: latency-svc-npvn5 [1.320265614s]
May 26 23:39:49.307: INFO: Created: latency-svc-8m6gf
May 26 23:39:49.311: INFO: Got endpoints: latency-svc-8m6gf [1.309037557s]
May 26 23:39:49.347: INFO: Created: latency-svc-2xf7d
May 26 23:39:49.365: INFO: Got endpoints: latency-svc-2xf7d [1.317163386s]
May 26 23:39:49.498: INFO: Created: latency-svc-krvt7
May 26 23:39:49.529: INFO: Got endpoints: latency-svc-krvt7 [1.408248833s]
May 26 23:39:49.577: INFO: Created: latency-svc-q7g8c
May 26 23:39:49.636: INFO: Got endpoints: latency-svc-q7g8c [1.474425111s]
May 26 23:39:49.646: INFO: Created: latency-svc-5xq64
May 26 23:39:49.682: INFO: Got endpoints: latency-svc-5xq64 [1.415645046s]
May 26 23:39:49.714: INFO: Created: latency-svc-xc667
May 26 23:39:49.726: INFO: Got endpoints: latency-svc-xc667 [1.395491257s]
May 26 23:39:49.798: INFO: Created: latency-svc-w8c47
May 26 23:39:49.820: INFO: Created: latency-svc-696bs
May 26 23:39:49.820: INFO: Got endpoints: latency-svc-w8c47 [1.418643133s]
May 26 23:39:49.868: INFO: Got endpoints: latency-svc-696bs [1.393703408s]
May 26 23:39:49.967: INFO: Created: latency-svc-496gq
May 26 23:39:49.997: INFO: Got endpoints: latency-svc-496gq [1.435211482s]
May 26 23:39:50.060: INFO: Created: latency-svc-mh7sw
May 26 23:39:52.093: INFO: Got endpoints: latency-svc-mh7sw [3.451625204s]
May 26 23:39:52.126: INFO: Created: latency-svc-2wxkf
May 26 23:39:52.151: INFO: Got endpoints: latency-svc-2wxkf [3.411456275s]
May 26 23:39:52.313: INFO: Created: latency-svc-ddkhz
May 26 23:39:52.352: INFO: Got endpoints: latency-svc-ddkhz [3.555051616s]
May 26 23:39:52.474: INFO: Created: latency-svc-lthnf
May 26 23:39:52.491: INFO: Got endpoints: latency-svc-lthnf [3.550864508s]
May 26 23:39:52.516: INFO: Created: latency-svc-wz6xm
May 26 23:39:52.594: INFO: Got endpoints: latency-svc-wz6xm [3.454970121s]
May 26 23:39:52.615: INFO: Created: latency-svc-7ngcw
May 26 23:39:52.628: INFO: Got endpoints: latency-svc-7ngcw [3.411269468s]
May 26 23:39:52.652: INFO: Created: latency-svc-lxj75
May 26 23:39:52.666: INFO: Got endpoints: latency-svc-lxj75 [3.354479523s]
May 26 23:39:52.774: INFO: Created: latency-svc-j4nh9
May 26 23:39:52.780: INFO: Got endpoints: latency-svc-j4nh9 [3.415096921s]
May 26 23:39:52.833: INFO: Created: latency-svc-vvnqn
May 26 23:39:52.846: INFO: Got endpoints: latency-svc-vvnqn [3.316792442s]
May 26 23:39:52.867: INFO: Created: latency-svc-6xq9m
May 26 23:39:52.924: INFO: Got endpoints: latency-svc-6xq9m [3.288168476s]
May 26 23:39:52.933: INFO: Created: latency-svc-7ns7r
May 26 23:39:52.949: INFO: Got endpoints: latency-svc-7ns7r [3.267239619s]
May 26 23:39:52.978: INFO: Created: latency-svc-npbj5
May 26 23:39:53.074: INFO: Got endpoints: latency-svc-npbj5 [3.347681929s]
May 26 23:39:53.104: INFO: Created: latency-svc-v5mg8
May 26 23:39:53.123: INFO: Got endpoints: latency-svc-v5mg8 [3.303116267s]
May 26 23:39:53.143: INFO: Created: latency-svc-qr867
May 26 23:39:53.167: INFO: Got endpoints: latency-svc-qr867 [3.299059807s]
May 26 23:39:53.235: INFO: Created: latency-svc-jqn29
May 26 23:39:53.238: INFO: Got endpoints: latency-svc-jqn29 [3.241452348s]
May 26 23:39:53.265: INFO: Created: latency-svc-txj24
May 26 23:39:53.296: INFO: Got endpoints: latency-svc-txj24 [1.203302084s]
May 26 23:39:53.326: INFO: Created: latency-svc-tpvcd
May 26 23:39:53.391: INFO: Got endpoints: latency-svc-tpvcd [1.23969052s]
May 26 23:39:53.394: INFO: Created: latency-svc-nzl6l
May 26 23:39:53.398: INFO: Got endpoints: latency-svc-nzl6l [1.046354431s]
May 26 23:39:53.425: INFO: Created: latency-svc-44h7x
May 26 23:39:53.440: INFO: Got endpoints: latency-svc-44h7x [949.541882ms]
May 26 23:39:53.462: INFO: Created: latency-svc-f966j
May 26 23:39:53.478: INFO: Got endpoints: latency-svc-f966j [883.289001ms]
May 26 23:39:53.540: INFO: Created: latency-svc-9gm66
May 26 23:39:53.566: INFO: Got endpoints: latency-svc-9gm66 [937.642228ms]
May 26 23:39:53.567: INFO: Created: latency-svc-pjjgt
May 26 23:39:53.590: INFO: Got endpoints: latency-svc-pjjgt [924.549138ms]
May 26 23:39:53.628: INFO: Created: latency-svc-xzt65
May 26 23:39:53.678: INFO: Got endpoints: latency-svc-xzt65 [897.691265ms]
May 26 23:39:53.689: INFO: Created: latency-svc-nt6hx
May 26 23:39:53.706: INFO: Got endpoints: latency-svc-nt6hx [860.519648ms]
May 26 23:39:53.731: INFO: Created: latency-svc-f57zs
May 26 23:39:53.749: INFO: Got endpoints: latency-svc-f57zs [825.337594ms]
May 26 23:39:53.857: INFO: Created: latency-svc-lds4z
May 26 23:39:53.887: INFO: Got endpoints: latency-svc-lds4z [938.106307ms]
May 26 23:39:53.888: INFO: Created: latency-svc-27hc2
May 26 23:39:53.906: INFO: Got endpoints: latency-svc-27hc2 [832.360295ms]
May 26 23:39:54.019: INFO: Created: latency-svc-fhbgt
May 26 23:39:54.058: INFO: Got endpoints: latency-svc-fhbgt [934.264824ms]
May 26 23:39:54.058: INFO: Created: latency-svc-qh4g2
May 26 23:39:54.088: INFO: Got endpoints: latency-svc-qh4g2 [921.33625ms]
May 26 23:39:54.175: INFO: Created: latency-svc-c4gdh
May 26 23:39:54.183: INFO: Got endpoints: latency-svc-c4gdh [944.257669ms]
May 26 23:39:54.251: INFO: Created: latency-svc-frqcc
May 26 23:39:54.322: INFO: Got endpoints: latency-svc-frqcc [1.025314547s]
May 26 23:39:54.403: INFO: Created: latency-svc-dzj4s
May 26 23:39:54.411: INFO: Got endpoints: latency-svc-dzj4s [1.020140859s]
May 26 23:39:54.463: INFO: Created: latency-svc-xpxrj
May 26 23:39:54.466: INFO: Got endpoints: latency-svc-xpxrj [1.067615805s]
May 26 23:39:54.537: INFO: Created: latency-svc-7r4d5
May 26 23:39:54.624: INFO: Got endpoints: latency-svc-7r4d5 [1.183672502s]
May 26 23:39:54.636: INFO: Created: latency-svc-dc8p2
May 26 23:39:54.653: INFO: Got endpoints: latency-svc-dc8p2 [1.175206277s]
May 26 23:39:54.686: INFO: Created: latency-svc-mskhh
May 26 23:39:54.701: INFO: Got endpoints: latency-svc-mskhh [1.13473793s]
May 26 23:39:54.721: INFO: Created: latency-svc-dg4rt
May 26 23:39:54.793: INFO: Got endpoints: latency-svc-dg4rt [1.202352993s]
May 26 23:39:54.814: INFO: Created: latency-svc-85qhv
May 26 23:39:54.834: INFO: Got endpoints: latency-svc-85qhv [1.156744217s]
May 26 23:39:54.870: INFO: Created: latency-svc-2lcnl
May 26 23:39:54.888: INFO: Got endpoints: latency-svc-2lcnl [1.181821892s]
May 26 23:39:56.287: INFO: Created: latency-svc-mdgdr
May 26 23:39:56.308: INFO: Got endpoints: latency-svc-mdgdr [2.55874655s]
May 26 23:39:56.451: INFO: Created: latency-svc-m28jc
May 26 23:39:56.482: INFO: Got endpoints: latency-svc-m28jc [2.594674608s]
May 26 23:39:56.483: INFO: Created: latency-svc-6h2tn
May 26 23:39:56.504: INFO: Got endpoints: latency-svc-6h2tn [2.597855795s]
May 26 23:39:56.527: INFO: Created: latency-svc-hl6l9
May 26 23:39:56.546: INFO: Got endpoints: latency-svc-hl6l9 [2.48775988s]
May 26 23:39:56.612: INFO: Created: latency-svc-tc9lg
May 26 23:39:56.617: INFO: Got endpoints: latency-svc-tc9lg [2.529041489s]
May 26 23:39:56.643: INFO: Created: latency-svc-dj9n2
May 26 23:39:56.660: INFO: Got endpoints: latency-svc-dj9n2 [2.476891867s]
May 26 23:39:56.686: INFO: Created: latency-svc-rq7bz
May 26 23:39:56.697: INFO: Got endpoints: latency-svc-rq7bz [2.375368996s]
May 26 23:39:56.791: INFO: Created: latency-svc-4n9fr
May 26 23:39:56.805: INFO: Got endpoints: latency-svc-4n9fr [2.393846157s]
May 26 23:39:56.882: INFO: Created: latency-svc-qglrb
May 26 23:39:56.890: INFO: Got endpoints: latency-svc-qglrb [2.423552326s]
May 26 23:39:56.914: INFO: Created: latency-svc-r9bqm
May 26 23:39:56.926: INFO: Got endpoints: latency-svc-r9bqm [2.30163332s]
May 26 23:39:56.953: INFO: Created: latency-svc-hzw5d
May 26 23:39:56.969: INFO: Got endpoints: latency-svc-hzw5d [2.315869581s]
May 26 23:39:57.031: INFO: Created: latency-svc-8png7
May 26 23:39:57.034: INFO: Got endpoints: latency-svc-8png7 [2.333530278s]
May 26 23:39:57.070: INFO: Created: latency-svc-pzbsn
May 26 23:39:57.089: INFO: Got endpoints: latency-svc-pzbsn [2.29669124s]
May 26 23:39:57.112: INFO: Created: latency-svc-r5984
May 26 23:39:57.126: INFO: Got endpoints: latency-svc-r5984 [2.291210446s]
May 26 23:39:57.194: INFO: Created: latency-svc-dvqr8
May 26 23:39:57.207: INFO: Got endpoints: latency-svc-dvqr8 [2.31882515s]
May 26 23:39:57.228: INFO: Created: latency-svc-rb9m4
May 26 23:39:57.246: INFO: Got endpoints: latency-svc-rb9m4 [937.993415ms]
May 26 23:39:57.276: INFO: Created: latency-svc-6dvgq
May 26 23:39:57.378: INFO: Got endpoints: latency-svc-6dvgq [896.251372ms]
May 26 23:39:57.381: INFO: Created: latency-svc-k5m9s
May 26 23:39:57.384: INFO: Got endpoints: latency-svc-k5m9s [880.555632ms]
May 26 23:39:57.408: INFO: Created: latency-svc-k4z4w
May 26 23:39:57.445: INFO: Got endpoints: latency-svc-k4z4w [899.336492ms]
May 26 23:39:57.517: INFO: Created: latency-svc-5kd2g
May 26 23:39:57.519: INFO: Got endpoints: latency-svc-5kd2g [901.162795ms]
May 26 23:39:57.549: INFO: Created: latency-svc-9ptn7
May 26 23:39:57.574: INFO: Got endpoints: latency-svc-9ptn7 [913.834018ms]
May 26 23:39:57.604: INFO: Created: latency-svc-2zh7d
May 26 23:39:57.614: INFO: Got endpoints: latency-svc-2zh7d [917.271848ms]
May 26 23:39:57.672: INFO: Created: latency-svc-t7p2f
May 26 23:39:57.685: INFO: Got endpoints: latency-svc-t7p2f [879.540261ms]
May 26 23:39:57.715: INFO: Created: latency-svc-gdk58
May 26 23:39:57.729: INFO: Got endpoints: latency-svc-gdk58 [839.704316ms]
May 26 23:39:57.810: INFO: Created: latency-svc-zmj9s
May 26 23:39:57.813: INFO: Got endpoints: latency-svc-zmj9s [887.662232ms]
May 26 23:39:57.845: INFO: Created: latency-svc-7rvsn
May 26 23:39:57.856: INFO: Got endpoints: latency-svc-7rvsn [886.988376ms]
May 26 23:39:57.883: INFO: Created: latency-svc-p2gq8
May 26 23:39:57.899: INFO: Got endpoints: latency-svc-p2gq8 [864.246601ms]
May 26 23:39:57.972: INFO: Created: latency-svc-hpmrt
May 26 23:39:57.997: INFO: Created: latency-svc-rrgj8
May 26 23:39:57.997: INFO: Got endpoints: latency-svc-hpmrt [908.175583ms]
May 26 23:39:58.029: INFO: Got endpoints: latency-svc-rrgj8 [903.456443ms]
May 26 23:39:58.109: INFO: Created: latency-svc-jtrsq
May 26 23:39:58.171: INFO: Got endpoints: latency-svc-jtrsq [963.677254ms]
May 26 23:39:58.172: INFO: Created: latency-svc-xfxch
May 26 23:39:58.264: INFO: Got endpoints: latency-svc-xfxch [1.018133992s]
May 26 23:39:58.317: INFO: Created: latency-svc-pcc4b
May 26 23:39:58.336: INFO: Got endpoints: latency-svc-pcc4b [957.692044ms]
May 26 23:39:58.477: INFO: Created: latency-svc-dlmv6
May 26 23:39:58.511: INFO: Got endpoints: latency-svc-dlmv6 [1.126203815s]
May 26 23:39:58.600: INFO: Created: latency-svc-xbdpm
May 26 23:39:58.613: INFO: Got endpoints: latency-svc-xbdpm [1.168122535s]
May 26 23:39:58.656: INFO: Created: latency-svc-mlm2z
May 26 23:39:58.674: INFO: Got endpoints: latency-svc-mlm2z [1.154827137s]
May 26 23:39:58.706: INFO: Created: latency-svc-kvxdd
May 26 23:39:58.804: INFO: Got endpoints: latency-svc-kvxdd [1.230196412s]
May 26 23:39:58.806: INFO: Created: latency-svc-59q7v
May 26 23:39:58.812: INFO: Got endpoints: latency-svc-59q7v [1.197237803s]
May 26 23:39:58.836: INFO: Created: latency-svc-gqr54
May 26 23:39:58.855: INFO: Got endpoints: latency-svc-gqr54 [1.170466634s]
May 26 23:39:58.884: INFO: Created: latency-svc-tv2qw
May 26 23:39:58.953: INFO: Got endpoints: latency-svc-tv2qw [1.2239549s]
May 26 23:39:59.001: INFO: Created: latency-svc-jpj65
May 26 23:39:59.017: INFO: Got endpoints: latency-svc-jpj65 [1.203750099s]
May 26 23:39:59.104: INFO: Created: latency-svc-4svp6
May 26 23:39:59.118: INFO: Got endpoints: latency-svc-4svp6 [1.26236417s]
May 26 23:39:59.151: INFO: Created: latency-svc-vmggg
May 26 23:39:59.168: INFO: Got endpoints: latency-svc-vmggg [1.268908514s]
May 26 23:39:59.247: INFO: Created: latency-svc-t676z
May 26 23:39:59.258: INFO: Got endpoints: latency-svc-t676z [1.260946932s]
May 26 23:39:59.286: INFO: Created: latency-svc-v4wqv
May 26 23:39:59.301: INFO: Got endpoints: latency-svc-v4wqv [1.271297486s]
May 26 23:39:59.327: INFO: Created: latency-svc-dsxnh
May 26 23:39:59.331: INFO: Got endpoints: latency-svc-dsxnh [1.160445713s]
May 26 23:39:59.379: INFO: Created: latency-svc-lbq74
May 26 23:39:59.409: INFO: Got endpoints: latency-svc-lbq74 [1.144550701s]
May 26 23:39:59.445: INFO: Created: latency-svc-6955g
May 26 23:39:59.464: INFO: Got endpoints: latency-svc-6955g [1.128151521s]
May 26 23:39:59.510: INFO: Created: latency-svc-zrkwm
May 26 23:39:59.533: INFO: Got endpoints: latency-svc-zrkwm [1.021857718s]
May 26 23:39:59.569: INFO: Created: latency-svc-88h22
May 26 23:39:59.579: INFO: Got endpoints: latency-svc-88h22 [965.744541ms]
May 26 23:39:59.605: INFO: Created: latency-svc-s548l
May 26 23:39:59.654: INFO: Got endpoints: latency-svc-s548l [979.934613ms]
May 26 23:39:59.672: INFO: Created: latency-svc-8bdwn
May 26 23:39:59.688: INFO: Got endpoints: latency-svc-8bdwn [883.743775ms]
May 26 23:39:59.709: INFO: Created: latency-svc-prv6x
May 26 23:39:59.737: INFO: Got endpoints: latency-svc-prv6x [925.098752ms]
May 26 23:39:59.792: INFO: Created: latency-svc-lvjxd
May 26 23:39:59.796: INFO: Got endpoints: latency-svc-lvjxd [941.39804ms]
May 26 23:39:59.820: INFO: Created: latency-svc-9v9jl
May 26 23:39:59.847: INFO: Got endpoints: latency-svc-9v9jl [893.454043ms]
May 26 23:39:59.877: INFO: Created: latency-svc-gqz4f
May 26 23:39:59.911: INFO: Got endpoints: latency-svc-gqz4f [893.923983ms]
May 26 23:39:59.925: INFO: Created: latency-svc-qpnzj
May 26 23:39:59.942: INFO: Got endpoints: latency-svc-qpnzj [823.846687ms]
May 26 23:39:59.964: INFO: Created: latency-svc-chmwm
May 26 23:39:59.980: INFO: Got endpoints: latency-svc-chmwm [811.943129ms]
May 26 23:40:00.007: INFO: Created: latency-svc-6982l
May 26 23:40:00.059: INFO: Got endpoints: latency-svc-6982l [800.071096ms]
May 26 23:40:00.081: INFO: Created: latency-svc-f6s4t
May 26 23:40:00.101: INFO: Got endpoints: latency-svc-f6s4t [800.497284ms]
May 26 23:40:00.129: INFO: Created: latency-svc-b7qnm
May 26 23:40:00.148: INFO: Got endpoints: latency-svc-b7qnm [816.967473ms]
May 26 23:40:00.198: INFO: Created: latency-svc-ftfhz
May 26 23:40:00.215: INFO: Got endpoints: latency-svc-ftfhz [805.629154ms]
May 26 23:40:00.265: INFO: Created: latency-svc-8rvkx
May 26 23:40:00.324: INFO: Got endpoints: latency-svc-8rvkx [859.978231ms]
May 26 23:40:00.411: INFO: Created: latency-svc-sncq9
May 26 23:40:00.450: INFO: Got endpoints: latency-svc-sncq9 [917.381479ms]
May 26 23:40:00.492: INFO: Created: latency-svc-mvlqs
May 26 23:40:00.516: INFO: Got endpoints: latency-svc-mvlqs [936.733348ms]
May 26 23:40:00.624: INFO: Created: latency-svc-n6542
May 26 23:40:00.678: INFO: Got endpoints: latency-svc-n6542 [1.024623014s]
May 26 23:40:00.679: INFO: Created: latency-svc-4jfkk
May 26 23:40:00.720: INFO: Got endpoints: latency-svc-4jfkk [1.032577153s]
May 26 23:40:00.777: INFO: Created: latency-svc-z9z4z
May 26 23:40:00.801: INFO: Got endpoints: latency-svc-z9z4z [1.064120644s]
May 26 23:40:00.832: INFO: Created: latency-svc-2dkb7
May 26 23:40:00.841: INFO: Got endpoints: latency-svc-2dkb7 [1.044826328s]
May 26 23:40:00.888: INFO: Created: latency-svc-qqsrp
May 26 23:40:00.891: INFO: Got endpoints: latency-svc-qqsrp [1.043814937s]
May 26 23:40:00.918: INFO: Created: latency-svc-v4qwk
May 26 23:40:00.932: INFO: Got endpoints: latency-svc-v4qwk [1.020417525s]
May 26 23:40:00.955: INFO: Created: latency-svc-p58p7
May 26 23:40:01.025: INFO: Got endpoints: latency-svc-p58p7 [1.082631806s]
May 26 23:40:01.043: INFO: Created: latency-svc-lxvnh
May 26 23:40:01.053: INFO: Got endpoints: latency-svc-lxvnh [1.072844249s]
May 26 23:40:01.105: INFO: Created: latency-svc-84ppb
May 26 23:40:01.120: INFO: Got endpoints: latency-svc-84ppb [1.061233238s]
May 26 23:40:01.181: INFO: Created: latency-svc-z2s8c
May 26 23:40:01.209: INFO: Got endpoints: latency-svc-z2s8c [1.108122889s]
May 26 23:40:01.238: INFO: Created: latency-svc-j8d2s
May 26 23:40:01.252: INFO: Got endpoints: latency-svc-j8d2s [1.103521505s]
latency-svc-hq988 May 26 23:40:01.338: INFO: Created: latency-svc-bjx6g May 26 23:40:01.338: INFO: Got endpoints: latency-svc-hq988 [1.123550266s] May 26 23:40:01.368: INFO: Got endpoints: latency-svc-bjx6g [1.043642532s] May 26 23:40:01.399: INFO: Created: latency-svc-dpjtl May 26 23:40:01.416: INFO: Got endpoints: latency-svc-dpjtl [965.67657ms] May 26 23:40:01.474: INFO: Created: latency-svc-qq6jm May 26 23:40:01.481: INFO: Got endpoints: latency-svc-qq6jm [964.676062ms] May 26 23:40:01.504: INFO: Created: latency-svc-rlctk May 26 23:40:01.526: INFO: Got endpoints: latency-svc-rlctk [847.412896ms] May 26 23:40:01.539: INFO: Created: latency-svc-59lpc May 26 23:40:01.572: INFO: Got endpoints: latency-svc-59lpc [851.437703ms] May 26 23:40:01.630: INFO: Created: latency-svc-fg8f9 May 26 23:40:01.650: INFO: Got endpoints: latency-svc-fg8f9 [849.032462ms] May 26 23:40:01.682: INFO: Created: latency-svc-qj6gc May 26 23:40:01.699: INFO: Got endpoints: latency-svc-qj6gc [857.816902ms] May 26 23:40:01.718: INFO: Created: latency-svc-d6wpw May 26 23:40:01.755: INFO: Got endpoints: latency-svc-d6wpw [864.501985ms] May 26 23:40:01.773: INFO: Created: latency-svc-kfhrp May 26 23:40:01.790: INFO: Got endpoints: latency-svc-kfhrp [858.587136ms] May 26 23:40:01.811: INFO: Created: latency-svc-jmk67 May 26 23:40:01.835: INFO: Got endpoints: latency-svc-jmk67 [810.466344ms] May 26 23:40:01.887: INFO: Created: latency-svc-b2pqx May 26 23:40:01.890: INFO: Got endpoints: latency-svc-b2pqx [837.489101ms] May 26 23:40:01.953: INFO: Created: latency-svc-5d8ct May 26 23:40:01.971: INFO: Got endpoints: latency-svc-5d8ct [850.877543ms] May 26 23:40:02.019: INFO: Created: latency-svc-gdr6n May 26 23:40:02.023: INFO: Got endpoints: latency-svc-gdr6n [813.302124ms] May 26 23:40:02.051: INFO: Created: latency-svc-kdmfm May 26 23:40:02.075: INFO: Got endpoints: latency-svc-kdmfm [823.286252ms] May 26 23:40:02.106: INFO: Created: latency-svc-fqldw May 26 23:40:02.174: INFO: Got endpoints: latency-svc-fqldw [836.195444ms] May 26 23:40:02.217: INFO: Created: latency-svc-g4x9s May 26 23:40:02.237: INFO: Got endpoints: latency-svc-g4x9s [869.132435ms] May 26 23:40:02.276: INFO: Created: latency-svc-xt2mp May 26 23:40:02.312: INFO: Got endpoints: latency-svc-xt2mp [896.344148ms] May 26 23:40:02.321: INFO: Created: latency-svc-f6zgp May 26 23:40:02.334: INFO: Got endpoints: latency-svc-f6zgp [853.388422ms] May 26 23:40:02.396: INFO: Created: latency-svc-dw2jx May 26 23:40:02.456: INFO: Got endpoints: latency-svc-dw2jx [930.758263ms] May 26 23:40:02.479: INFO: Created: latency-svc-xz2wx May 26 23:40:02.497: INFO: Got endpoints: latency-svc-xz2wx [924.699983ms] May 26 23:40:02.526: INFO: Created: latency-svc-hd9j6 May 26 23:40:02.546: INFO: Got endpoints: latency-svc-hd9j6 [895.512096ms] May 26 23:40:02.631: INFO: Created: latency-svc-pl7ht May 26 23:40:02.640: INFO: Got endpoints: latency-svc-pl7ht [941.213187ms] May 26 23:40:02.682: INFO: Created: latency-svc-pwcqw May 26 23:40:02.701: INFO: Got endpoints: latency-svc-pwcqw [945.85068ms] May 26 23:40:02.798: INFO: Created: latency-svc-lrvf8 May 26 23:40:02.822: INFO: Got endpoints: latency-svc-lrvf8 [1.031586011s] May 26 23:40:02.858: INFO: Created: latency-svc-8vs88 May 26 23:40:02.917: INFO: Got endpoints: latency-svc-8vs88 [1.081928269s] May 26 23:40:02.940: INFO: Created: latency-svc-wjqg9 May 26 23:40:02.954: INFO: Got endpoints: latency-svc-wjqg9 [1.063942908s] May 26 23:40:02.975: INFO: Created: latency-svc-68h7t May 26 23:40:02.991: INFO: Got endpoints: 
latency-svc-68h7t [1.019897765s] May 26 23:40:03.013: INFO: Created: latency-svc-5qzl6 May 26 23:40:03.049: INFO: Got endpoints: latency-svc-5qzl6 [1.025887012s] May 26 23:40:03.063: INFO: Created: latency-svc-qg5ll May 26 23:40:03.098: INFO: Got endpoints: latency-svc-qg5ll [1.0227335s] May 26 23:40:03.134: INFO: Created: latency-svc-dmpqw May 26 23:40:03.148: INFO: Got endpoints: latency-svc-dmpqw [973.006549ms] May 26 23:40:03.195: INFO: Created: latency-svc-v4jqd May 26 23:40:03.198: INFO: Got endpoints: latency-svc-v4jqd [960.20084ms] May 26 23:40:03.221: INFO: Created: latency-svc-qpnlt May 26 23:40:03.239: INFO: Got endpoints: latency-svc-qpnlt [926.233711ms] May 26 23:40:03.266: INFO: Created: latency-svc-flr2b May 26 23:40:03.281: INFO: Got endpoints: latency-svc-flr2b [947.386001ms] May 26 23:40:03.318: INFO: Created: latency-svc-lm6qz May 26 23:40:03.322: INFO: Got endpoints: latency-svc-lm6qz [865.410812ms] May 26 23:40:03.344: INFO: Created: latency-svc-cs5fj May 26 23:40:03.354: INFO: Got endpoints: latency-svc-cs5fj [857.349654ms] May 26 23:40:03.378: INFO: Created: latency-svc-l5hvm May 26 23:40:03.397: INFO: Got endpoints: latency-svc-l5hvm [850.936276ms] May 26 23:40:03.451: INFO: Created: latency-svc-m5kcw May 26 23:40:03.474: INFO: Got endpoints: latency-svc-m5kcw [833.593587ms] May 26 23:40:03.474: INFO: Created: latency-svc-swf95 May 26 23:40:03.506: INFO: Got endpoints: latency-svc-swf95 [804.669883ms] May 26 23:40:03.506: INFO: Latencies: [342.187957ms 406.494238ms 520.627377ms 610.76447ms 689.88582ms 779.36384ms 800.071096ms 800.497284ms 804.669883ms 805.629154ms 810.466344ms 811.943129ms 813.302124ms 816.967473ms 823.286252ms 823.846687ms 825.337594ms 832.360295ms 833.593587ms 836.195444ms 837.489101ms 839.704316ms 847.412896ms 849.032462ms 850.877543ms 850.936276ms 851.437703ms 853.388422ms 857.349654ms 857.816902ms 858.587136ms 859.978231ms 860.519648ms 864.246601ms 864.501985ms 865.410812ms 869.132435ms 879.540261ms 880.555632ms 883.289001ms 883.743775ms 886.988376ms 887.662232ms 893.454043ms 893.923983ms 895.512096ms 896.251372ms 896.344148ms 897.691265ms 898.34725ms 899.336492ms 901.162795ms 903.456443ms 908.175583ms 913.834018ms 917.271848ms 917.381479ms 921.33625ms 924.549138ms 924.699983ms 925.098752ms 926.233711ms 930.758263ms 934.264824ms 936.733348ms 937.642228ms 937.993415ms 938.106307ms 941.213187ms 941.39804ms 944.257669ms 945.85068ms 947.189532ms 947.386001ms 949.541882ms 957.692044ms 960.20084ms 963.677254ms 964.676062ms 965.67657ms 965.744541ms 973.006549ms 979.934613ms 1.018133992s 1.019897765s 1.020140859s 1.020417525s 1.021857718s 1.0227335s 1.024623014s 1.025314547s 1.025887012s 1.031586011s 1.032577153s 1.043642532s 1.043814937s 1.044826328s 1.046354431s 1.053362651s 1.061233238s 1.063942908s 1.064120644s 1.067615805s 1.072844249s 1.081928269s 1.082631806s 1.091006863s 1.091455599s 1.103521505s 1.108122889s 1.112510872s 1.123550266s 1.126203815s 1.128151521s 1.130390053s 1.131614233s 1.13473793s 1.144550701s 1.154827137s 1.156744217s 1.160445713s 1.16328776s 1.165346806s 1.167743223s 1.168122535s 1.170466634s 1.173343999s 1.175206277s 1.181574656s 1.181821892s 1.183672502s 1.185275518s 1.185858593s 1.197237803s 1.202352993s 1.203302084s 1.203750099s 1.2239549s 1.228879029s 1.229774784s 1.230196412s 1.236204154s 1.236259666s 1.238498646s 1.23969052s 1.244874802s 1.25551785s 1.259408261s 1.260946932s 1.26236417s 1.268908514s 1.271297486s 1.278500847s 1.283007969s 1.288039327s 1.296284565s 1.303839442s 1.309037557s 1.317163386s 1.320265614s 
1.38317818s 1.393703408s 1.395491257s 1.408248833s 1.415645046s 1.418643133s 1.435211482s 1.437533295s 1.474425111s 1.496680153s 2.291210446s 2.29669124s 2.30163332s 2.315869581s 2.31882515s 2.333530278s 2.375368996s 2.393846157s 2.423552326s 2.476891867s 2.48775988s 2.529041489s 2.55874655s 2.594674608s 2.597855795s 3.241452348s 3.267239619s 3.288168476s 3.299059807s 3.303116267s 3.316792442s 3.347681929s 3.354479523s 3.411269468s 3.411456275s 3.415096921s 3.451625204s 3.454970121s 3.550864508s 3.555051616s] May 26 23:40:03.506: INFO: 50 %ile: 1.063942908s May 26 23:40:03.506: INFO: 90 %ile: 2.48775988s May 26 23:40:03.506: INFO: 99 %ile: 3.550864508s May 26 23:40:03.506: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:40:03.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1636" for this suite. • [SLOW TEST:23.678 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":8,"skipped":154,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:40:03.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:40:07.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-980" for this suite. 
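For readers reconstructing what this test measures: each Created/Got endpoints pair above is one sample of the time from Service creation until its Endpoints object is populated, and the summary lines reduce the 200 samples to 50/90/99 percentiles. A minimal, illustrative Go sketch of that reduction follows; the helper name and the index rounding are assumptions, not the framework's exact code.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the p-th percentile out of an ascending-sorted
// slice of latency samples (simple nearest-rank style indexing).
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := (len(sorted) * p) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few stand-in samples; the real test collects one per service.
	samples := []time.Duration{
		913 * time.Millisecond,
		1063 * time.Millisecond,
		2487 * time.Millisecond,
		3550 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
```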
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":9,"skipped":158,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:40:07.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8443/configmap-test-55f1f57c-cb82-4521-9e10-fb46aad7c4d0 STEP: Creating a pod to test consume configMaps May 26 23:40:08.105: INFO: Waiting up to 5m0s for pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28" in namespace "configmap-8443" to be "Succeeded or Failed" May 26 23:40:08.336: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28": Phase="Pending", Reason="", readiness=false. Elapsed: 231.027764ms May 26 23:40:10.738: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.632379333s May 26 23:40:12.770: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.664265944s May 26 23:40:14.842: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.736649362s STEP: Saw pod success May 26 23:40:14.842: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28" satisfied condition "Succeeded or Failed" May 26 23:40:14.866: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28 container env-test: STEP: delete the pod May 26 23:40:15.159: INFO: Waiting for pod pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28 to disappear May 26 23:40:15.165: INFO: Pod pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:40:15.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8443" for this suite. 
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:40:07.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-8443/configmap-test-55f1f57c-cb82-4521-9e10-fb46aad7c4d0
STEP: Creating a pod to test consume configMaps
May 26 23:40:08.105: INFO: Waiting up to 5m0s for pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28" in namespace "configmap-8443" to be "Succeeded or Failed"
May 26 23:40:08.336: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28": Phase="Pending", Reason="", readiness=false. Elapsed: 231.027764ms
May 26 23:40:10.738: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.632379333s
May 26 23:40:12.770: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.664265944s
May 26 23:40:14.842: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.736649362s
STEP: Saw pod success
May 26 23:40:14.842: INFO: Pod "pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28" satisfied condition "Succeeded or Failed"
May 26 23:40:14.866: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28 container env-test:
STEP: delete the pod
May 26 23:40:15.159: INFO: Waiting for pod pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28 to disappear
May 26 23:40:15.165: INFO: Pod pod-configmaps-545bcfb9-9bfc-417e-be9f-a4089a660b28 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:40:15.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8443" for this suite.
• [SLOW TEST:7.427 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
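"Consumable via the environment" means the pod's env-test container reads the ConfigMap through an env-var reference rather than a volume. A minimal sketch of that wiring, assuming the standard k8s.io/api types; the variable and key names here are illustrative, not the test's actual ones:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One ConfigMap key projected into the container environment; the
	// test then runs the pod and checks the variable's value in its logs.
	env := []corev1.EnvVar{{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
				Key:                  "data-1",
			},
		},
	}}
	fmt.Printf("%+v\n", env)
}
```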
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 26 23:40:15.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-9990
STEP: creating service affinity-clusterip-transition in namespace services-9990
STEP: creating replication controller affinity-clusterip-transition in namespace services-9990
I0526 23:40:15.692941 8 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9990, replica count: 3
I0526 23:40:18.743276 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 23:40:21.743496 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0526 23:40:24.743729 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 26 23:40:24.775: INFO: Creating new exec pod
May 26 23:40:29.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9990 execpod-affinityq2h6k -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
May 26 23:40:30.277: INFO: stderr: "I0526 23:40:30.016108 198 log.go:172] (0xc000c5e790) (0xc00034e0a0) Create stream\nI0526 23:40:30.016164 198 log.go:172] (0xc000c5e790) (0xc00034e0a0) Stream added, broadcasting: 1\nI0526 23:40:30.029264 198 log.go:172] (0xc000c5e790) Reply frame received for 1\nI0526 23:40:30.029318 198 log.go:172] (0xc000c5e790) (0xc00034e820) Create stream\nI0526 23:40:30.029330 198 log.go:172] (0xc000c5e790) (0xc00034e820) Stream added, broadcasting: 3\nI0526 23:40:30.030393 198 log.go:172] (0xc000c5e790) Reply frame received for 3\nI0526 23:40:30.030431 198 log.go:172] (0xc000c5e790) (0xc0003ee1e0) Create stream\nI0526 23:40:30.030442 198 log.go:172] (0xc000c5e790) (0xc0003ee1e0) Stream added, broadcasting: 5\nI0526 23:40:30.031507 198 log.go:172] (0xc000c5e790) Reply frame received for 5\nI0526 23:40:30.172255 198 log.go:172] (0xc000c5e790) Data frame received for 5\nI0526 23:40:30.172275 198 log.go:172] (0xc0003ee1e0) (5) Data frame handling\nI0526 23:40:30.172282 198 log.go:172] (0xc0003ee1e0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0526 23:40:30.268615 198 log.go:172] (0xc000c5e790) Data frame received for 5\nI0526 23:40:30.268639 198 log.go:172] (0xc0003ee1e0) (5) Data frame handling\nI0526 23:40:30.268655 198 log.go:172] (0xc0003ee1e0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0526 23:40:30.268819 198 log.go:172] (0xc000c5e790) Data frame received for 5\nI0526 23:40:30.268834 198 log.go:172] (0xc0003ee1e0) (5) Data frame handling\nI0526 23:40:30.269415 198 log.go:172] (0xc000c5e790) Data frame received for 3\nI0526 23:40:30.269433 198 log.go:172] (0xc00034e820) (3) Data frame handling\nI0526 23:40:30.272324 198 log.go:172] (0xc000c5e790) Data frame received for 1\nI0526 23:40:30.272340 198 log.go:172] (0xc00034e0a0) (1) Data frame handling\nI0526 23:40:30.272351 198 log.go:172] (0xc00034e0a0) (1) Data frame sent\nI0526 23:40:30.272363 198 log.go:172] (0xc000c5e790) (0xc00034e0a0) Stream removed, broadcasting: 1\nI0526 23:40:30.272390 198 log.go:172] (0xc000c5e790) Go away received\nI0526 23:40:30.272632 198 log.go:172] (0xc000c5e790) (0xc00034e0a0) Stream removed, broadcasting: 1\nI0526 23:40:30.272647 198 log.go:172] (0xc000c5e790) (0xc00034e820) Stream removed, broadcasting: 3\nI0526 23:40:30.272653 198 log.go:172] (0xc000c5e790) (0xc0003ee1e0) Stream removed, broadcasting: 5\n"
May 26 23:40:30.277: INFO: stdout: ""
May 26 23:40:30.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9990 execpod-affinityq2h6k -- /bin/sh -x -c nc -zv -t -w 2 10.106.199.21 80'
May 26 23:40:30.579: INFO: stderr: "I0526 23:40:30.502061 218 log.go:172] (0xc000a9d1e0) (0xc000365d60) Create stream\nI0526 23:40:30.502113 218 log.go:172] (0xc000a9d1e0) (0xc000365d60) Stream added, broadcasting: 1\nI0526 23:40:30.504879 218 log.go:172] (0xc000a9d1e0) Reply frame received for 1\nI0526 23:40:30.504906 218 log.go:172] (0xc000a9d1e0) (0xc0003fe8c0) Create stream\nI0526 23:40:30.504913 218 log.go:172] (0xc000a9d1e0) (0xc0003fe8c0) Stream added, broadcasting: 3\nI0526 23:40:30.505889 218 log.go:172] (0xc000a9d1e0) Reply frame received for 3\nI0526 23:40:30.505917 218 log.go:172] (0xc000a9d1e0) (0xc0003fef00) Create stream\nI0526 23:40:30.505926 218 log.go:172] (0xc000a9d1e0) (0xc0003fef00) Stream added, broadcasting: 5\nI0526 23:40:30.506586 218 log.go:172] (0xc000a9d1e0) Reply frame received for 5\nI0526 23:40:30.573507 218 log.go:172] (0xc000a9d1e0) Data frame received for 5\nI0526 23:40:30.573549 218 log.go:172] (0xc0003fef00) (5) Data frame handling\nI0526 23:40:30.573566 218 log.go:172] (0xc0003fef00) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.199.21 80\nConnection to 10.106.199.21 80 port [tcp/http] succeeded!\nI0526 23:40:30.573591 218 log.go:172] (0xc000a9d1e0) Data frame received for 3\nI0526 23:40:30.573599 218 log.go:172] (0xc0003fe8c0) (3) Data frame handling\nI0526 23:40:30.573651 218 log.go:172] (0xc000a9d1e0) Data frame received for 5\nI0526 23:40:30.573670 218 log.go:172] (0xc0003fef00) (5) Data frame handling\nI0526 23:40:30.575079 218 log.go:172] (0xc000a9d1e0) Data frame received for 1\nI0526 23:40:30.575100 218 log.go:172] (0xc000365d60) (1) Data frame handling\nI0526 23:40:30.575127 218 log.go:172] (0xc000365d60)
(1) Data frame sent\nI0526 23:40:30.575173 218 log.go:172] (0xc000a9d1e0) (0xc000365d60) Stream removed, broadcasting: 1\nI0526 23:40:30.575251 218 log.go:172] (0xc000a9d1e0) Go away received\nI0526 23:40:30.575647 218 log.go:172] (0xc000a9d1e0) (0xc000365d60) Stream removed, broadcasting: 1\nI0526 23:40:30.575674 218 log.go:172] (0xc000a9d1e0) (0xc0003fe8c0) Stream removed, broadcasting: 3\nI0526 23:40:30.575692 218 log.go:172] (0xc000a9d1e0) (0xc0003fef00) Stream removed, broadcasting: 5\n" May 26 23:40:30.579: INFO: stdout: "" May 26 23:40:30.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9990 execpod-affinityq2h6k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.199.21:80/ ; done' May 26 23:40:31.114: INFO: stderr: "I0526 23:40:30.799622 235 log.go:172] (0xc0008b2000) (0xc00091e5a0) Create stream\nI0526 23:40:30.799694 235 log.go:172] (0xc0008b2000) (0xc00091e5a0) Stream added, broadcasting: 1\nI0526 23:40:30.802065 235 log.go:172] (0xc0008b2000) Reply frame received for 1\nI0526 23:40:30.802094 235 log.go:172] (0xc0008b2000) (0xc0009126e0) Create stream\nI0526 23:40:30.802103 235 log.go:172] (0xc0008b2000) (0xc0009126e0) Stream added, broadcasting: 3\nI0526 23:40:30.802721 235 log.go:172] (0xc0008b2000) Reply frame received for 3\nI0526 23:40:30.802745 235 log.go:172] (0xc0008b2000) (0xc00091f540) Create stream\nI0526 23:40:30.802752 235 log.go:172] (0xc0008b2000) (0xc00091f540) Stream added, broadcasting: 5\nI0526 23:40:30.803455 235 log.go:172] (0xc0008b2000) Reply frame received for 5\nI0526 23:40:30.860600 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:30.860636 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:30.860649 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:30.860665 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:30.860673 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:30.860686 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.000287 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.000319 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.000337 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.001324 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.001355 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.001382 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.001491 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.001520 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.001542 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.010363 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.010394 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.010416 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.011241 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.011258 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.011267 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.011278 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.011283 235 log.go:172] 
(0xc00091f540) (5) Data frame handling\nI0526 23:40:31.011288 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.018913 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.019021 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.019153 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.019604 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.019636 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.019644 235 log.go:172] (0xc00091f540) (5) Data frame sent\nI0526 23:40:31.019650 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.019654 235 log.go:172] (0xc00091f540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.019666 235 log.go:172] (0xc00091f540) (5) Data frame sent\nI0526 23:40:31.019675 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.019682 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.019695 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.025313 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.025334 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.025354 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.026119 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.026151 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.026165 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.026186 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.026198 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.026213 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.037078 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.037100 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.037248 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.038024 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.038037 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.038049 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.038057 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.038062 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.038071 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.044638 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.044658 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.044671 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.044974 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.044988 235 log.go:172] (0xc00091f540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/I0526 23:40:31.045003 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.045027 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.045052 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.045067 235 log.go:172] (0xc00091f540) (5) Data frame sent\nI0526 23:40:31.045076 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 
23:40:31.045086 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.045103 235 log.go:172] (0xc00091f540) (5) Data frame sent\n\nI0526 23:40:31.052763 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.052787 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.052798 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.053326 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.053349 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.053360 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.053375 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.053384 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.053392 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.060014 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.060035 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.060051 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.060403 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.060415 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.060423 235 log.go:172] (0xc00091f540) (5) Data frame sent\nI0526 23:40:31.060431 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.060443 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.060454 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.060463 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.060471 235 log.go:172] (0xc0009126e0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.060485 235 log.go:172] (0xc00091f540) (5) Data frame sent\nI0526 23:40:31.065618 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.065643 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.065661 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.066577 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.066596 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.066609 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.066684 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.066702 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.066714 235 log.go:172] (0xc00091f540) (5) Data frame sent\nI0526 23:40:31.066724 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.066733 235 log.go:172] (0xc00091f540) (5) Data frame handling\n+ echo\nI0526 23:40:31.066754 235 log.go:172] (0xc00091f540) (5) Data frame sent\nI0526 23:40:31.066816 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.066828 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.066839 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.071704 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.071720 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.071736 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.072194 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.072270 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.072290 235 
log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.072308 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.072324 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.072341 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.076918 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.076942 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.076957 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.077473 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.077490 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.077501 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.077520 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.077535 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.077540 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.081373 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.081392 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.081404 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.081814 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.081830 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.081836 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.081852 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.081861 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.081867 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.088491 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.088507 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.088524 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.088944 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.088964 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.088984 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.089008 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.089021 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.089033 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.093730 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.093751 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.093772 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.094109 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.094119 235 log.go:172] (0xc00091f540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.094128 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.094148 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.094160 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.094171 235 log.go:172] (0xc00091f540) (5) Data frame sent\nI0526 23:40:31.098453 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.098476 235 log.go:172] (0xc0009126e0) (3) Data frame 
handling\nI0526 23:40:31.098497 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.099415 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.099487 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.099511 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.099527 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.099538 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.099549 235 log.go:172] (0xc00091f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.107327 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.107347 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.107360 235 log.go:172] (0xc0009126e0) (3) Data frame sent\nI0526 23:40:31.107916 235 log.go:172] (0xc0008b2000) Data frame received for 3\nI0526 23:40:31.107930 235 log.go:172] (0xc0009126e0) (3) Data frame handling\nI0526 23:40:31.107943 235 log.go:172] (0xc0008b2000) Data frame received for 5\nI0526 23:40:31.107948 235 log.go:172] (0xc00091f540) (5) Data frame handling\nI0526 23:40:31.109376 235 log.go:172] (0xc0008b2000) Data frame received for 1\nI0526 23:40:31.109399 235 log.go:172] (0xc00091e5a0) (1) Data frame handling\nI0526 23:40:31.109417 235 log.go:172] (0xc00091e5a0) (1) Data frame sent\nI0526 23:40:31.109548 235 log.go:172] (0xc0008b2000) (0xc00091e5a0) Stream removed, broadcasting: 1\nI0526 23:40:31.109576 235 log.go:172] (0xc0008b2000) Go away received\nI0526 23:40:31.109842 235 log.go:172] (0xc0008b2000) (0xc00091e5a0) Stream removed, broadcasting: 1\nI0526 23:40:31.109862 235 log.go:172] (0xc0008b2000) (0xc0009126e0) Stream removed, broadcasting: 3\nI0526 23:40:31.109874 235 log.go:172] (0xc0008b2000) (0xc00091f540) Stream removed, broadcasting: 5\n" May 26 23:40:31.114: INFO: stdout: "\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-mpx8m\naffinity-clusterip-transition-8vst2\naffinity-clusterip-transition-8vst2\naffinity-clusterip-transition-mpx8m\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-8vst2\naffinity-clusterip-transition-mpx8m\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-8vst2\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-8vst2\naffinity-clusterip-transition-mpx8m\naffinity-clusterip-transition-mpx8m\naffinity-clusterip-transition-mpx8m" May 26 23:40:31.114: INFO: Received response from host: May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-mpx8m May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-8vst2 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-8vst2 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-mpx8m May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-8vst2 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-mpx8m May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-8vst2 May 26 23:40:31.114: INFO: Received response from host: 
affinity-clusterip-transition-rfk66 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-8vst2 May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-mpx8m May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-mpx8m May 26 23:40:31.114: INFO: Received response from host: affinity-clusterip-transition-mpx8m May 26 23:40:31.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9990 execpod-affinityq2h6k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.199.21:80/ ; done' May 26 23:40:31.547: INFO: stderr: "I0526 23:40:31.362943 252 log.go:172] (0xc000920fd0) (0xc0007fafa0) Create stream\nI0526 23:40:31.363026 252 log.go:172] (0xc000920fd0) (0xc0007fafa0) Stream added, broadcasting: 1\nI0526 23:40:31.366421 252 log.go:172] (0xc000920fd0) Reply frame received for 1\nI0526 23:40:31.366469 252 log.go:172] (0xc000920fd0) (0xc0007fb540) Create stream\nI0526 23:40:31.366487 252 log.go:172] (0xc000920fd0) (0xc0007fb540) Stream added, broadcasting: 3\nI0526 23:40:31.367697 252 log.go:172] (0xc000920fd0) Reply frame received for 3\nI0526 23:40:31.367741 252 log.go:172] (0xc000920fd0) (0xc000808640) Create stream\nI0526 23:40:31.367759 252 log.go:172] (0xc000920fd0) (0xc000808640) Stream added, broadcasting: 5\nI0526 23:40:31.368731 252 log.go:172] (0xc000920fd0) Reply frame received for 5\nI0526 23:40:31.430254 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.430282 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.430290 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.430297 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.430302 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.430308 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.434661 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.434689 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.434712 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.440085 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.440105 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.440125 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.440151 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.440161 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.440178 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.443515 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.443537 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.443564 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.446283 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.446298 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.446320 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.446334 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.446348 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 
23:40:31.446358 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.450470 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.450487 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.450529 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.451103 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.451130 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.451147 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.451164 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.451194 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.451209 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.457715 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.457740 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.457752 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.458393 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.458408 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.458420 252 log.go:172] (0xc000808640) (5) Data frame sent\nI0526 23:40:31.458425 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.458430 252 log.go:172] (0xc000808640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.458444 252 log.go:172] (0xc000808640) (5) Data frame sent\nI0526 23:40:31.458449 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.458453 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.458458 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.465724 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.465747 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.465758 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.465775 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.465787 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.465798 252 log.go:172] (0xc000808640) (5) Data frame sent\nI0526 23:40:31.465817 252 log.go:172] (0xc000920fd0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0526 23:40:31.465840 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.465886 252 log.go:172] (0xc000808640) (5) Data frame sent\n http://10.106.199.21:80/\nI0526 23:40:31.465905 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.465914 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.465926 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.472903 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.472916 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.472926 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.473743 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.473762 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.473778 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.473844 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.473860 252 log.go:172] (0xc0007fb540) 
(3) Data frame handling\nI0526 23:40:31.473870 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.477047 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.477069 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.477087 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.477655 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.477667 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.477680 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.477700 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.477707 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.477718 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.482616 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.482632 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.482646 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.482881 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.482902 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.482922 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.483002 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.483018 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.483037 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.490418 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.490434 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.490451 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.494623 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.494645 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.494655 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.494667 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.494685 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.494695 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.498260 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.498276 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.498289 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.498642 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.498661 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.498675 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.498686 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.498702 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.498709 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.505680 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.505692 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.505702 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.506022 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.506037 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 
23:40:31.506056 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.506143 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.506157 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.506173 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.512855 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.512869 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.512885 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.513443 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.513466 252 log.go:172] (0xc000808640) (5) Data frame handling\n+ I0526 23:40:31.513520 252 log.go:172] (0xc000808640) (5) Data frame sent\nI0526 23:40:31.513620 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.513640 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.513658 252 log.go:172] (0xc000808640) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.513671 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.513678 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.513685 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.521290 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.521319 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.521346 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.521572 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.521591 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.521629 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.521650 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.521688 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.521708 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.525859 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.525872 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.525887 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.526363 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.526375 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.526383 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.526391 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.526426 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.526445 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.532263 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.532278 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.532298 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.532655 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.532670 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.532685 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.532694 252 log.go:172] (0xc000808640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.199.21:80/\nI0526 23:40:31.532705 252 log.go:172] (0xc0007fb540) 
(3) Data frame handling\nI0526 23:40:31.532717 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.539654 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.539676 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.539693 252 log.go:172] (0xc0007fb540) (3) Data frame sent\nI0526 23:40:31.540056 252 log.go:172] (0xc000920fd0) Data frame received for 5\nI0526 23:40:31.540074 252 log.go:172] (0xc000808640) (5) Data frame handling\nI0526 23:40:31.540180 252 log.go:172] (0xc000920fd0) Data frame received for 3\nI0526 23:40:31.540196 252 log.go:172] (0xc0007fb540) (3) Data frame handling\nI0526 23:40:31.541768 252 log.go:172] (0xc000920fd0) Data frame received for 1\nI0526 23:40:31.541790 252 log.go:172] (0xc0007fafa0) (1) Data frame handling\nI0526 23:40:31.541802 252 log.go:172] (0xc0007fafa0) (1) Data frame sent\nI0526 23:40:31.541815 252 log.go:172] (0xc000920fd0) (0xc0007fafa0) Stream removed, broadcasting: 1\nI0526 23:40:31.541833 252 log.go:172] (0xc000920fd0) Go away received\nI0526 23:40:31.542161 252 log.go:172] (0xc000920fd0) (0xc0007fafa0) Stream removed, broadcasting: 1\nI0526 23:40:31.542175 252 log.go:172] (0xc000920fd0) (0xc0007fb540) Stream removed, broadcasting: 3\nI0526 23:40:31.542182 252 log.go:172] (0xc000920fd0) (0xc000808640) Stream removed, broadcasting: 5\n" May 26 23:40:31.548: INFO: stdout: "\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66\naffinity-clusterip-transition-rfk66" May 26 23:40:31.548: INFO: Received response from host: May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Received response from host: affinity-clusterip-transition-rfk66 May 26 23:40:31.548: INFO: Cleaning up 
the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9990, will wait for the garbage collector to delete the pods
May 26 23:40:31.848: INFO: Deleting ReplicationController affinity-clusterip-transition took: 113.505518ms
May 26 23:40:32.248: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.228488ms
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 26 23:40:45.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9990" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:30.258 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":11,"skipped":189,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
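The two curl loops above show the behavior under test: with affinity engaged, every request from the exec pod landed on one backend (affinity-clusterip-transition-rfk66), where the earlier loop spread across all three. A hedged sketch of the Service object that drives this, using the standard k8s.io/api types; the selector and port values are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A ClusterIP service with ClientIP session affinity: repeated
	// requests from one client IP are pinned to one backend pod.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-transition"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": "affinity-clusterip-transition"},
			Ports:           []corev1.ServicePort{{Port: 80}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
	fmt.Println(svc.Spec.SessionAffinity)

	// "Switching" affinity is an update of the same field; with None,
	// responses spread across backends again.
	svc.Spec.SessionAffinity = corev1.ServiceAffinityNone
	fmt.Println(svc.Spec.SessionAffinity)
}
```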
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:40:49.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 26 23:40:50.119: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 26 23:40:50.144: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 26 23:40:50.144: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 26 23:40:50.162: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 26 23:40:50.162: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 26 23:40:50.265: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 26 23:40:50.265: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 26 23:40:57.774: INFO: 
limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:40:57.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-86" for this suite. • [SLOW TEST:7.890 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":13,"skipped":315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:40:57.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-27223be9-0453-46c1-8dd9-af4bd2877dcd STEP: Creating a pod to test consume secrets May 26 23:40:58.032: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6" in namespace "projected-694" to be "Succeeded or Failed" May 26 23:40:58.035: INFO: Pod "pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.628927ms May 26 23:41:00.040: INFO: Pod "pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008292777s May 26 23:41:02.044: INFO: Pod "pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6": Phase="Running", Reason="", readiness=true. Elapsed: 4.012796728s May 26 23:41:04.086: INFO: Pod "pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.054070497s STEP: Saw pod success May 26 23:41:04.086: INFO: Pod "pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6" satisfied condition "Succeeded or Failed" May 26 23:41:04.089: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6 container projected-secret-volume-test: STEP: delete the pod May 26 23:41:04.251: INFO: Waiting for pod pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6 to disappear May 26 23:41:04.397: INFO: Pod pod-projected-secrets-21a55bc8-3d7d-4d3f-a745-6b40866c3bd6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:04.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-694" for this suite. • [SLOW TEST:6.529 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:04.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 26 23:41:09.796: INFO: Successfully updated pod "annotationupdate7e1794a4-d610-48a2-8ec5-28dc330872e5" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:13.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7336" for this suite. 
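------------------------------
For reference, the pod this downward-API annotations spec builds looks roughly like the sketch below: a downwardAPI volume projecting metadata.annotations into a file, which the kubelet rewrites after the annotations change. The names, busybox image, and mount path are illustrative, not the e2e framework's actual fixture; client-go v0.18+ API types are assumed.

package sketches

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationPod projects the pod's own annotations into
// /etc/podinfo/annotations; the kubelet refreshes the file after the
// annotations are updated, which is the behavior the spec verifies.
func annotationPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "annotationupdate-demo", // illustrative name
            Annotations: map[string]string{"build": "one"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "sleep 3600"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "annotations",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                        }},
                    },
                },
            }},
        },
    }
}
------------------------------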
• [SLOW TEST:9.442 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":15,"skipped":366,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:13.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 26 23:41:13.970: INFO: Waiting up to 5m0s for pod "pod-6ba55886-606e-4952-95be-332bb1a49e60" in namespace "emptydir-2526" to be "Succeeded or Failed" May 26 23:41:13.995: INFO: Pod "pod-6ba55886-606e-4952-95be-332bb1a49e60": Phase="Pending", Reason="", readiness=false. Elapsed: 24.522258ms May 26 23:41:16.002: INFO: Pod "pod-6ba55886-606e-4952-95be-332bb1a49e60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031277359s May 26 23:41:18.174: INFO: Pod "pod-6ba55886-606e-4952-95be-332bb1a49e60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.203306597s STEP: Saw pod success May 26 23:41:18.174: INFO: Pod "pod-6ba55886-606e-4952-95be-332bb1a49e60" satisfied condition "Succeeded or Failed" May 26 23:41:18.176: INFO: Trying to get logs from node latest-worker2 pod pod-6ba55886-606e-4952-95be-332bb1a49e60 container test-container: STEP: delete the pod May 26 23:41:18.313: INFO: Waiting for pod pod-6ba55886-606e-4952-95be-332bb1a49e60 to disappear May 26 23:41:18.330: INFO: Pod pod-6ba55886-606e-4952-95be-332bb1a49e60 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:18.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2526" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":16,"skipped":382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:18.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 26 23:41:18.426: INFO: Waiting up to 5m0s for pod "pod-bb3204f2-95c6-4719-bcc7-9189ccdeaefb" in namespace "emptydir-7017" to be "Succeeded or Failed" May 26 23:41:18.458: INFO: Pod "pod-bb3204f2-95c6-4719-bcc7-9189ccdeaefb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.278913ms May 26 23:41:20.462: INFO: Pod "pod-bb3204f2-95c6-4719-bcc7-9189ccdeaefb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035685025s May 26 23:41:22.466: INFO: Pod "pod-bb3204f2-95c6-4719-bcc7-9189ccdeaefb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03958873s STEP: Saw pod success May 26 23:41:22.466: INFO: Pod "pod-bb3204f2-95c6-4719-bcc7-9189ccdeaefb" satisfied condition "Succeeded or Failed" May 26 23:41:22.468: INFO: Trying to get logs from node latest-worker pod pod-bb3204f2-95c6-4719-bcc7-9189ccdeaefb container test-container: STEP: delete the pod May 26 23:41:22.515: INFO: Waiting for pod pod-bb3204f2-95c6-4719-bcc7-9189ccdeaefb to disappear May 26 23:41:22.519: INFO: Pod pod-bb3204f2-95c6-4719-bcc7-9189ccdeaefb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:22.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7017" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:22.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 23:41:22.732: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddd8f40b-191d-42bf-8d67-d8cf708d6a43" in namespace "downward-api-1976" to be "Succeeded or Failed" May 26 23:41:22.778: INFO: Pod "downwardapi-volume-ddd8f40b-191d-42bf-8d67-d8cf708d6a43": Phase="Pending", Reason="", readiness=false. Elapsed: 45.201855ms May 26 23:41:24.781: INFO: Pod "downwardapi-volume-ddd8f40b-191d-42bf-8d67-d8cf708d6a43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048886174s May 26 23:41:26.803: INFO: Pod "downwardapi-volume-ddd8f40b-191d-42bf-8d67-d8cf708d6a43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070492717s STEP: Saw pod success May 26 23:41:26.803: INFO: Pod "downwardapi-volume-ddd8f40b-191d-42bf-8d67-d8cf708d6a43" satisfied condition "Succeeded or Failed" May 26 23:41:26.805: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ddd8f40b-191d-42bf-8d67-d8cf708d6a43 container client-container: STEP: delete the pod May 26 23:41:26.847: INFO: Waiting for pod downwardapi-volume-ddd8f40b-191d-42bf-8d67-d8cf708d6a43 to disappear May 26 23:41:26.857: INFO: Pod downwardapi-volume-ddd8f40b-191d-42bf-8d67-d8cf708d6a43 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:26.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1976" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":448,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:26.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-f59bcc49-23c7-4776-b65a-0618be0e338f STEP: Creating a pod to test consume secrets May 26 23:41:27.368: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dd9f8e19-934a-4281-84c4-d7c287492094" in namespace "projected-7266" to be "Succeeded or Failed" May 26 23:41:27.403: INFO: Pod "pod-projected-secrets-dd9f8e19-934a-4281-84c4-d7c287492094": Phase="Pending", Reason="", readiness=false. Elapsed: 34.562856ms May 26 23:41:29.407: INFO: Pod "pod-projected-secrets-dd9f8e19-934a-4281-84c4-d7c287492094": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038656437s May 26 23:41:31.411: INFO: Pod "pod-projected-secrets-dd9f8e19-934a-4281-84c4-d7c287492094": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043161087s STEP: Saw pod success May 26 23:41:31.411: INFO: Pod "pod-projected-secrets-dd9f8e19-934a-4281-84c4-d7c287492094" satisfied condition "Succeeded or Failed" May 26 23:41:31.415: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-dd9f8e19-934a-4281-84c4-d7c287492094 container secret-volume-test: STEP: delete the pod May 26 23:41:31.440: INFO: Waiting for pod pod-projected-secrets-dd9f8e19-934a-4281-84c4-d7c287492094 to disappear May 26 23:41:31.504: INFO: Pod pod-projected-secrets-dd9f8e19-934a-4281-84c4-d7c287492094 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:31.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7266" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":19,"skipped":456,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:31.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 23:41:32.378: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 23:41:34.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133292, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133292, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133292, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133292, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 23:41:37.517: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:37.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1492" for this suite. STEP: Destroying namespace "webhook-1492-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.164 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":20,"skipped":460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:37.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:53.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9714" for this suite. STEP: Destroying namespace "nsdeletetest-8650" for this suite. May 26 23:41:53.096: INFO: Namespace nsdeletetest-8650 was already deleted STEP: Destroying namespace "nsdeletetest-6503" for this suite. 
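------------------------------
A minimal client-go equivalent of the delete-and-wait flow this namespaces spec performs: once the namespace object itself is gone, every pod it contained has been removed, which is the property the spec re-checks after recreating the namespace. The poll interval and timeout below are arbitrary choices.

package sketches

import (
    "context"
    "time"

    "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait deletes a namespace and polls until the
// namespace object disappears, implying all of its pods are gone too.
func deleteNamespaceAndWait(ctx context.Context, cs kubernetes.Interface, ns string) error {
    if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
        return err
    }
    return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
        _, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
        if errors.IsNotFound(err) {
            return true, nil // namespace fully removed
        }
        return false, err // nil err: still terminating, keep polling
    })
}
------------------------------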
• [SLOW TEST:15.447 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":21,"skipped":492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:53.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 26 23:41:53.242: INFO: Waiting up to 5m0s for pod "downward-api-50ecae99-2549-48a3-9563-a9e80b21a1eb" in namespace "downward-api-1770" to be "Succeeded or Failed" May 26 23:41:53.283: INFO: Pod "downward-api-50ecae99-2549-48a3-9563-a9e80b21a1eb": Phase="Pending", Reason="", readiness=false. Elapsed: 40.370369ms May 26 23:41:55.287: INFO: Pod "downward-api-50ecae99-2549-48a3-9563-a9e80b21a1eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044899266s May 26 23:41:57.291: INFO: Pod "downward-api-50ecae99-2549-48a3-9563-a9e80b21a1eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048988213s STEP: Saw pod success May 26 23:41:57.291: INFO: Pod "downward-api-50ecae99-2549-48a3-9563-a9e80b21a1eb" satisfied condition "Succeeded or Failed" May 26 23:41:57.294: INFO: Trying to get logs from node latest-worker2 pod downward-api-50ecae99-2549-48a3-9563-a9e80b21a1eb container dapi-container: STEP: delete the pod May 26 23:41:57.321: INFO: Waiting for pod downward-api-50ecae99-2549-48a3-9563-a9e80b21a1eb to disappear May 26 23:41:57.349: INFO: Pod downward-api-50ecae99-2549-48a3-9563-a9e80b21a1eb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:41:57.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1770" for this suite. 
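------------------------------
A sketch of the container shape this downward-API spec creates: cpu/memory requests and limits declared on the container, then mirrored back into its environment via resourceFieldRef. The quantities and variable names are illustrative; with an empty ContainerName, the selector defaults to the container the env var is declared on.

package sketches

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// resourceEnvContainer exposes its own requests and limits as env vars.
func resourceEnvContainer() corev1.Container {
    newEnv := func(name, res string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: res},
            },
        }
    }
    return corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
        Resources: corev1.ResourceRequirements{
            Requests: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("250m"),
                corev1.ResourceMemory: resource.MustParse("32Mi"),
            },
            Limits: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("500m"),
                corev1.ResourceMemory: resource.MustParse("64Mi"),
            },
        },
        Env: []corev1.EnvVar{
            newEnv("CPU_REQUEST", "requests.cpu"),
            newEnv("MEMORY_REQUEST", "requests.memory"),
            newEnv("CPU_LIMIT", "limits.cpu"),
            newEnv("MEMORY_LIMIT", "limits.memory"),
        },
    }
}
------------------------------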
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":544,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:41:57.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 26 23:42:01.492: INFO: &Pod{ObjectMeta:{send-events-a4ef49db-93c7-4592-ba16-1131a594dd9f events-2457 /api/v1/namespaces/events-2457/pods/send-events-a4ef49db-93c7-4592-ba16-1131a594dd9f 9995c4dc-6629-4892-9fac-89cd13f8a335 7936408 0 2020-05-26 23:41:57 +0000 UTC map[name:foo time:429502734] map[] [] [] [{e2e.test Update v1 2020-05-26 23:41:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:42:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrhvm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrhvm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrhvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:41:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:42:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:42:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:41:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.55,StartTime:2020-05-26 23:41:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:42:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://52a8db9fb054e040b570148ea1b022f60da5b02e40fc062c098f617ca7c4edc3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 26 23:42:03.521: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 26 23:42:05.526: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:42:05.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2457" for this suite. • [SLOW TEST:8.224 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":23,"skipped":556,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:42:05.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 23:42:06.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 23:42:08.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133326, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133326, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133326, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726133326, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 23:42:11.733: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 26 23:42:11.773: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:42:11.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7581" for this suite. STEP: Destroying namespace "webhook-7581-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.431 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":24,"skipped":556,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:42:12.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 26 23:42:12.077: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 23:42:12.117: INFO: Waiting for terminating namespaces to be deleted... 
May 26 23:42:12.120: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 26 23:42:12.124: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 26 23:42:12.125: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 26 23:42:12.125: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 26 23:42:12.125: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 26 23:42:12.125: INFO: send-events-a4ef49db-93c7-4592-ba16-1131a594dd9f from events-2457 started at 2020-05-26 23:41:57 +0000 UTC (1 container statuses recorded) May 26 23:42:12.125: INFO: Container p ready: true, restart count 0 May 26 23:42:12.125: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 26 23:42:12.125: INFO: Container kindnet-cni ready: true, restart count 2 May 26 23:42:12.125: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 26 23:42:12.125: INFO: Container kube-proxy ready: true, restart count 0 May 26 23:42:12.125: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 26 23:42:12.129: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 26 23:42:12.129: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 26 23:42:12.129: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 26 23:42:12.129: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 26 23:42:12.129: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 26 23:42:12.129: INFO: Container kindnet-cni ready: true, restart count 2 May 26 23:42:12.129: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 26 23:42:12.129: INFO: Container kube-proxy ready: true, restart count 0 May 26 23:42:12.129: INFO: sample-webhook-deployment-75dd644756-kl4sh from webhook-7581 started at 2020-05-26 23:42:06 +0000 UTC (1 container statuses recorded) May 26 23:42:12.129: INFO: Container sample-webhook ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1612b88c89528613], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1612b88c8a744115], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:42:13.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5906" for this suite. 
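------------------------------
The pod that produced the two FailedScheduling events above is essentially just a nodeSelector that no node carries, so the scheduler leaves it Pending. A minimal sketch (the label key/value is illustrative):

package sketches

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod carries a nodeSelector no node in the cluster satisfies;
// the scheduler reports "0/3 nodes are available" and the pod stays Pending.
func restrictedPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"label": "nonempty"}, // matches no node
            Containers: []corev1.Container{{
                Name:  "restricted",
                Image: "busybox",
            }},
        },
    }
}
------------------------------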
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":25,"skipped":572,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:42:13.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 23:42:13.211: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 26 23:42:15.355: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:42:16.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3635" for this suite. 
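------------------------------
A sketch of the quota-versus-RC interplay this spec drives: a two-pod quota plus a three-replica ReplicationController, after which quota admission rejects the third pod and the RC surfaces a ReplicaFailure condition until it is scaled down. Assumes client-go v0.18+; error handling and the polling the real test does are trimmed to essentials.

package sketches

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// quotaThenOversizedRC creates the quota and RC, then prints the RC's
// status conditions (the failure condition appears asynchronously, once
// quota admission has rejected the extra pod).
func quotaThenOversizedRC(ctx context.Context, cs kubernetes.Interface, ns string) error {
    quota := &corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
        },
    }
    if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
        return err
    }
    replicas := int32(3)
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: map[string]string{"name": "condition-test"},
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "condition-test"}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "box", Image: "busybox", Command: []string{"sh", "-c", "sleep 3600"}}},
                },
            },
        },
    }
    if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
        return err
    }
    got, err := cs.CoreV1().ReplicationControllers(ns).Get(ctx, "condition-test", metav1.GetOptions{})
    if err != nil {
        return err
    }
    for _, c := range got.Status.Conditions {
        fmt.Printf("condition %s=%s: %s\n", c.Type, c.Status, c.Message)
    }
    return nil
}
------------------------------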
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":26,"skipped":577,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:42:16.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 26 23:42:20.796: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1070 PodName:var-expansion-c89d1597-a30e-4b50-8fcd-ceb5dffef1b9 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 23:42:20.796: INFO: >>> kubeConfig: /root/.kube/config I0526 23:42:20.836131 8 log.go:172] (0xc0023be9a0) (0xc00200f2c0) Create stream I0526 23:42:20.836160 8 log.go:172] (0xc0023be9a0) (0xc00200f2c0) Stream added, broadcasting: 1 I0526 23:42:20.839662 8 log.go:172] (0xc0023be9a0) Reply frame received for 1 I0526 23:42:20.839701 8 log.go:172] (0xc0023be9a0) (0xc001b36fa0) Create stream I0526 23:42:20.839717 8 log.go:172] (0xc0023be9a0) (0xc001b36fa0) Stream added, broadcasting: 3 I0526 23:42:20.840578 8 log.go:172] (0xc0023be9a0) Reply frame received for 3 I0526 23:42:20.840616 8 log.go:172] (0xc0023be9a0) (0xc00200f360) Create stream I0526 23:42:20.840632 8 log.go:172] (0xc0023be9a0) (0xc00200f360) Stream added, broadcasting: 5 I0526 23:42:20.841812 8 log.go:172] (0xc0023be9a0) Reply frame received for 5 I0526 23:42:20.929286 8 log.go:172] (0xc0023be9a0) Data frame received for 3 I0526 23:42:20.929322 8 log.go:172] (0xc001b36fa0) (3) Data frame handling I0526 23:42:20.929408 8 log.go:172] (0xc0023be9a0) Data frame received for 5 I0526 23:42:20.929437 8 log.go:172] (0xc00200f360) (5) Data frame handling I0526 23:42:20.931011 8 log.go:172] (0xc0023be9a0) Data frame received for 1 I0526 23:42:20.931048 8 log.go:172] (0xc00200f2c0) (1) Data frame handling I0526 23:42:20.931084 8 log.go:172] (0xc00200f2c0) (1) Data frame sent I0526 23:42:20.931111 8 log.go:172] (0xc0023be9a0) (0xc00200f2c0) Stream removed, broadcasting: 1 I0526 23:42:20.931209 8 log.go:172] (0xc0023be9a0) Go away received I0526 23:42:20.931711 8 log.go:172] (0xc0023be9a0) (0xc00200f2c0) Stream removed, broadcasting: 1 I0526 23:42:20.931737 8 log.go:172] (0xc0023be9a0) (0xc001b36fa0) Stream removed, broadcasting: 3 I0526 23:42:20.931755 8 log.go:172] (0xc0023be9a0) (0xc00200f360) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 26 23:42:20.935: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1070 PodName:var-expansion-c89d1597-a30e-4b50-8fcd-ceb5dffef1b9 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 23:42:20.935: INFO: >>> kubeConfig: 
/root/.kube/config I0526 23:42:20.970730 8 log.go:172] (0xc0023bf130) (0xc00200f9a0) Create stream I0526 23:42:20.970756 8 log.go:172] (0xc0023bf130) (0xc00200f9a0) Stream added, broadcasting: 1 I0526 23:42:20.974127 8 log.go:172] (0xc0023bf130) Reply frame received for 1 I0526 23:42:20.974169 8 log.go:172] (0xc0023bf130) (0xc00200fa40) Create stream I0526 23:42:20.974187 8 log.go:172] (0xc0023bf130) (0xc00200fa40) Stream added, broadcasting: 3 I0526 23:42:20.975278 8 log.go:172] (0xc0023bf130) Reply frame received for 3 I0526 23:42:20.975316 8 log.go:172] (0xc0023bf130) (0xc00255b4a0) Create stream I0526 23:42:20.975330 8 log.go:172] (0xc0023bf130) (0xc00255b4a0) Stream added, broadcasting: 5 I0526 23:42:20.976332 8 log.go:172] (0xc0023bf130) Reply frame received for 5 I0526 23:42:21.050563 8 log.go:172] (0xc0023bf130) Data frame received for 5 I0526 23:42:21.050591 8 log.go:172] (0xc00255b4a0) (5) Data frame handling I0526 23:42:21.050608 8 log.go:172] (0xc0023bf130) Data frame received for 3 I0526 23:42:21.050620 8 log.go:172] (0xc00200fa40) (3) Data frame handling I0526 23:42:21.051848 8 log.go:172] (0xc0023bf130) Data frame received for 1 I0526 23:42:21.051873 8 log.go:172] (0xc00200f9a0) (1) Data frame handling I0526 23:42:21.051890 8 log.go:172] (0xc00200f9a0) (1) Data frame sent I0526 23:42:21.051940 8 log.go:172] (0xc0023bf130) (0xc00200f9a0) Stream removed, broadcasting: 1 I0526 23:42:21.052047 8 log.go:172] (0xc0023bf130) Go away received I0526 23:42:21.052102 8 log.go:172] (0xc0023bf130) (0xc00200f9a0) Stream removed, broadcasting: 1 I0526 23:42:21.052147 8 log.go:172] (0xc0023bf130) (0xc00200fa40) Stream removed, broadcasting: 3 I0526 23:42:21.052161 8 log.go:172] (0xc0023bf130) (0xc00255b4a0) Stream removed, broadcasting: 5 STEP: updating the annotation value May 26 23:42:21.598: INFO: Successfully updated pod "var-expansion-c89d1597-a30e-4b50-8fcd-ceb5dffef1b9" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 26 23:42:21.620: INFO: Deleting pod "var-expansion-c89d1597-a30e-4b50-8fcd-ceb5dffef1b9" in namespace "var-expansion-1070" May 26 23:42:21.641: INFO: Wait up to 5m0s for pod "var-expansion-c89d1597-a30e-4b50-8fcd-ceb5dffef1b9" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:43:05.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1070" for this suite. 
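------------------------------
The subpath machinery this variable-expansion spec exercises can be sketched with subPathExpr: the kubelet expands a declared env var into the mount's subpath at mount time. The real test drives the path through an annotation; this simplified version uses metadata.name, and the volume name and paths are illustrative.

package sketches

import (
    corev1 "k8s.io/api/core/v1"
)

// subPathContainer mounts a shared volume under a subpath expanded from
// the POD_NAME env var, so each pod writes under its own directory.
func subPathContainer() corev1.Container {
    return corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "touch /volume_mount/test.log && sleep 3600"},
        Env: []corev1.EnvVar{{
            Name: "POD_NAME",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
            },
        }},
        VolumeMounts: []corev1.VolumeMount{{
            Name:        "workdir",
            MountPath:   "/volume_mount",
            SubPathExpr: "$(POD_NAME)", // expanded by the kubelet, not a shell
        }},
    }
}
------------------------------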
• [SLOW TEST:49.310 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":27,"skipped":590,"failed":0} [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:43:05.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0526 23:43:46.478937 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 23:43:46.478: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:43:46.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4367" for this suite. 
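------------------------------
"Delete options say so" in the spec above means the Orphan propagation policy; a minimal client-go sketch (v0.18+ signatures):

package sketches

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes a ReplicationController with the Orphan
// propagation policy; the garbage collector then leaves the RC's pods
// running, which is what the 30-second watch above confirms.
func deleteRCOrphaningPods(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    orphan := metav1.DeletePropagationOrphan
    return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
        PropagationPolicy: &orphan,
    })
}
------------------------------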
• [SLOW TEST:40.769 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":28,"skipped":590,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:43:46.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-699d3fd3-527e-4272-b82c-11c992d4f565 STEP: Creating a pod to test consume secrets May 26 23:43:46.627: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c" in namespace "projected-4754" to be "Succeeded or Failed" May 26 23:43:46.698: INFO: Pod "pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 70.800029ms May 26 23:43:48.703: INFO: Pod "pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075584329s May 26 23:43:50.707: INFO: Pod "pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c": Phase="Running", Reason="", readiness=true. Elapsed: 4.079478721s May 26 23:43:52.854: INFO: Pod "pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.226751801s STEP: Saw pod success May 26 23:43:52.854: INFO: Pod "pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c" satisfied condition "Succeeded or Failed" May 26 23:43:53.058: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c container projected-secret-volume-test: STEP: delete the pod May 26 23:43:53.140: INFO: Waiting for pod pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c to disappear May 26 23:43:53.464: INFO: Pod pod-projected-secrets-40cd9714-440f-4ca6-a081-8d9fa11d3d7c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:43:53.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4754" for this suite. 
• [SLOW TEST:7.201 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":591,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:43:53.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4989 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-4989 May 26 23:43:54.405: INFO: Found 0 stateful pods, waiting for 1 May 26 23:44:04.410: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 26 23:44:04.451: INFO: Deleting all statefulset in ns statefulset-4989 May 26 23:44:04.476: INFO: Scaling statefulset ss to 0 May 26 23:44:24.590: INFO: Waiting for statefulset status.replicas updated to 0 May 26 23:44:24.593: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:44:24.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4989" for this suite. 
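
The scale-subresource steps logged above ("getting scale subresource", "updating a scale subresource") amount to a read-modify-write of the /scale endpoint rather than the StatefulSet object itself. A minimal client-go sketch, with namespace and name assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "ss" // hypothetical namespace and StatefulSet
	// GetScale returns an autoscaling/v1 Scale object for the subresource.
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2 // modify replicas through the subresource only
	_, err = cs.AppsV1().StatefulSets(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("scale subresource updated")
}

The test then re-reads the StatefulSet and verifies Spec.Replicas changed, confirming that writes through /scale propagate to the parent object.
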
• [SLOW TEST:30.961 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":30,"skipped":604,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:44:24.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-42bb33ad-4dcd-408b-9574-61b7a780078b STEP: Creating a pod to test consume secrets May 26 23:44:24.784: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-76a96ac0-6e7d-4449-8fba-97a60e247b9a" in namespace "projected-5781" to be "Succeeded or Failed" May 26 23:44:24.787: INFO: Pod "pod-projected-secrets-76a96ac0-6e7d-4449-8fba-97a60e247b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.410644ms May 26 23:44:26.791: INFO: Pod "pod-projected-secrets-76a96ac0-6e7d-4449-8fba-97a60e247b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007604904s May 26 23:44:28.809: INFO: Pod "pod-projected-secrets-76a96ac0-6e7d-4449-8fba-97a60e247b9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02535067s STEP: Saw pod success May 26 23:44:28.809: INFO: Pod "pod-projected-secrets-76a96ac0-6e7d-4449-8fba-97a60e247b9a" satisfied condition "Succeeded or Failed" May 26 23:44:28.813: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-76a96ac0-6e7d-4449-8fba-97a60e247b9a container projected-secret-volume-test: STEP: delete the pod May 26 23:44:28.873: INFO: Waiting for pod pod-projected-secrets-76a96ac0-6e7d-4449-8fba-97a60e247b9a to disappear May 26 23:44:28.883: INFO: Pod pod-projected-secrets-76a96ac0-6e7d-4449-8fba-97a60e247b9a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:44:28.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5781" for this suite. 
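
The non-root variant just run combines the same projected secret volume with a pod-level security context, so a non-root UID can still read the 0440-mode files via the fsGroup. A sketch of the relevant spec fragment; every concrete value here is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid, fsGroup, mode := int64(1000), int64(1001), int32(0440)
	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &uid,     // container process runs as this non-root UID
			FSGroup:   &fsGroup, // volume files get this group at mount time
		},
		Containers: []corev1.Container{{
			Name:  "projected-secret-volume-test",
			Image: "busybox",
		}},
		Volumes: []corev1.Volume{{
			Name: "projected-secret-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &mode, // 0440 so group (fsGroup) retains read access
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "projected-secret-test", // hypothetical secret
							},
						},
					}},
				},
			},
		}},
	}
	out, _ := yaml.Marshal(&spec)
	fmt.Print(string(out))
}
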
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":618,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:44:28.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 26 23:44:28.938: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 23:44:28.970: INFO: Waiting for terminating namespaces to be deleted... May 26 23:44:29.123: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 26 23:44:29.177: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 26 23:44:29.177: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 26 23:44:29.177: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 26 23:44:29.177: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 26 23:44:29.177: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 26 23:44:29.177: INFO: Container kindnet-cni ready: true, restart count 2 May 26 23:44:29.177: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 26 23:44:29.177: INFO: Container kube-proxy ready: true, restart count 0 May 26 23:44:29.177: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 26 23:44:29.182: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 26 23:44:29.182: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 26 23:44:29.182: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 26 23:44:29.182: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 26 23:44:29.182: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 26 23:44:29.182: INFO: Container kindnet-cni ready: true, restart count 2 May 26 23:44:29.182: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 26 23:44:29.182: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch 
a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b6f73578-ea84-4bcc-9cc8-5b1b39c0bce0 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-b6f73578-ea84-4bcc-9cc8-5b1b39c0bce0 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b6f73578-ea84-4bcc-9cc8-5b1b39c0bce0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:49:37.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-307" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.565 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":32,"skipped":633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:49:37.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 26 23:49:37.595: INFO: Waiting up to 5m0s for pod "client-containers-03f9c9ed-6ab5-45f3-ab98-323207ef1152" in namespace "containers-3392" to be "Succeeded or Failed" May 26 23:49:37.620: INFO: Pod "client-containers-03f9c9ed-6ab5-45f3-ab98-323207ef1152": Phase="Pending", Reason="", readiness=false. Elapsed: 25.467646ms May 26 23:49:39.624: INFO: Pod "client-containers-03f9c9ed-6ab5-45f3-ab98-323207ef1152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029429663s May 26 23:49:41.629: INFO: Pod "client-containers-03f9c9ed-6ab5-45f3-ab98-323207ef1152": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034803017s STEP: Saw pod success May 26 23:49:41.630: INFO: Pod "client-containers-03f9c9ed-6ab5-45f3-ab98-323207ef1152" satisfied condition "Succeeded or Failed" May 26 23:49:41.633: INFO: Trying to get logs from node latest-worker2 pod client-containers-03f9c9ed-6ab5-45f3-ab98-323207ef1152 container test-container: STEP: delete the pod May 26 23:49:41.805: INFO: Waiting for pod client-containers-03f9c9ed-6ab5-45f3-ab98-323207ef1152 to disappear May 26 23:49:41.814: INFO: Pod client-containers-03f9c9ed-6ab5-45f3-ab98-323207ef1152 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:49:41.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3392" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":33,"skipped":658,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:49:41.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-e8b34025-081f-4d75-bb38-b6b80c63f007 in namespace container-probe-3603 May 26 23:49:45.970: INFO: Started pod liveness-e8b34025-081f-4d75-bb38-b6b80c63f007 in namespace container-probe-3603 STEP: checking the pod's current state and verifying that restartCount is present May 26 23:49:45.992: INFO: Initial restart count of pod liveness-e8b34025-081f-4d75-bb38-b6b80c63f007 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:53:46.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3603" for this suite. 
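
The liveness test above watches restartCount for four minutes and asserts it stays at zero. The probe it exercises is a plain TCP check on port 8080, roughly like the sketch below. Caveats: the image and args are assumptions (any server listening on 8080 works), and the embedded field is named ProbeHandler in current k8s.io/api while the 1.19-era API this suite ran against called it Handler.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // assumed image serving 8080
				Args:  []string{"netexec", "--http-port=8080"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						// Kubelet opens a TCP connection; success means alive,
						// so a healthy listener never triggers a restart.
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := yaml.Marshal(&pod)
	fmt.Print(string(out))
}
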
• [SLOW TEST:244.852 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":675,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:53:46.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-czqv STEP: Creating a pod to test atomic-volume-subpath May 26 23:53:47.170: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-czqv" in namespace "subpath-5136" to be "Succeeded or Failed" May 26 23:53:47.195: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Pending", Reason="", readiness=false. Elapsed: 24.996253ms May 26 23:53:49.200: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02955464s May 26 23:53:51.204: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 4.033922224s May 26 23:53:53.209: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 6.038893729s May 26 23:53:55.214: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 8.043802619s May 26 23:53:57.219: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 10.048726495s May 26 23:53:59.223: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 12.052868013s May 26 23:54:01.227: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 14.056471976s May 26 23:54:03.231: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 16.060857781s May 26 23:54:05.235: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 18.065051303s May 26 23:54:07.239: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 20.069060332s May 26 23:54:09.244: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. Elapsed: 22.073209491s May 26 23:54:11.355: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.184175306s May 26 23:54:13.359: INFO: Pod "pod-subpath-test-configmap-czqv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.188399822s STEP: Saw pod success May 26 23:54:13.359: INFO: Pod "pod-subpath-test-configmap-czqv" satisfied condition "Succeeded or Failed" May 26 23:54:13.362: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-czqv container test-container-subpath-configmap-czqv: STEP: delete the pod May 26 23:54:13.413: INFO: Waiting for pod pod-subpath-test-configmap-czqv to disappear May 26 23:54:13.419: INFO: Pod pod-subpath-test-configmap-czqv no longer exists STEP: Deleting pod pod-subpath-test-configmap-czqv May 26 23:54:13.419: INFO: Deleting pod "pod-subpath-test-configmap-czqv" in namespace "subpath-5136" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:54:13.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5136" for this suite. • [SLOW TEST:26.742 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":35,"skipped":689,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:54:13.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 23:54:15.890: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 23:54:18.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134055, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134055, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134056, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134055, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 23:54:21.142: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 23:54:21.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:54:22.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5252" for this suite. STEP: Destroying namespace "webhook-5252-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.991 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":36,"skipped":723,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:54:22.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 26 23:54:22.510: INFO: Waiting up to 5m0s for pod "pod-bdd94b46-504c-4048-a890-5f9dd8eef033" in namespace "emptydir-2863" to be "Succeeded or Failed" May 26 23:54:22.514: INFO: Pod 
"pod-bdd94b46-504c-4048-a890-5f9dd8eef033": Phase="Pending", Reason="", readiness=false. Elapsed: 3.528357ms May 26 23:54:24.518: INFO: Pod "pod-bdd94b46-504c-4048-a890-5f9dd8eef033": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007690202s May 26 23:54:26.523: INFO: Pod "pod-bdd94b46-504c-4048-a890-5f9dd8eef033": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01268283s STEP: Saw pod success May 26 23:54:26.523: INFO: Pod "pod-bdd94b46-504c-4048-a890-5f9dd8eef033" satisfied condition "Succeeded or Failed" May 26 23:54:26.526: INFO: Trying to get logs from node latest-worker2 pod pod-bdd94b46-504c-4048-a890-5f9dd8eef033 container test-container: STEP: delete the pod May 26 23:54:26.579: INFO: Waiting for pod pod-bdd94b46-504c-4048-a890-5f9dd8eef033 to disappear May 26 23:54:26.594: INFO: Pod pod-bdd94b46-504c-4048-a890-5f9dd8eef033 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:54:26.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2863" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":723,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:54:26.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-423 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 26 23:54:26.729: INFO: Found 0 stateful pods, waiting for 3 May 26 23:54:36.925: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 23:54:36.925: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 23:54:36.925: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 26 23:54:46.739: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 23:54:46.739: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 23:54:46.739: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 26 23:54:46.766: INFO: Updating stateful set ss2 STEP: 
Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 26 23:54:56.897: INFO: Updating stateful set ss2 May 26 23:54:56.941: INFO: Waiting for Pod statefulset-423/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 26 23:55:07.992: INFO: Found 2 stateful pods, waiting for 3 May 26 23:55:17.998: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 23:55:17.998: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 23:55:17.998: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 26 23:55:18.022: INFO: Updating stateful set ss2 May 26 23:55:18.085: INFO: Waiting for Pod statefulset-423/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 26 23:55:28.113: INFO: Updating stateful set ss2 May 26 23:55:28.155: INFO: Waiting for StatefulSet statefulset-423/ss2 to complete update May 26 23:55:28.155: INFO: Waiting for Pod statefulset-423/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 26 23:55:38.163: INFO: Waiting for StatefulSet statefulset-423/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 26 23:55:48.163: INFO: Deleting all statefulset in ns statefulset-423 May 26 23:55:48.165: INFO: Scaling statefulset ss2 to 0 May 26 23:56:08.183: INFO: Waiting for statefulset status.replicas updated to 0 May 26 23:56:08.186: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:56:08.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-423" for this suite. 
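
The canary and phased phases logged above are both driven by one knob: the Partition of the StatefulSet's RollingUpdate strategy. With Partition=2 on a 3-replica set, only ordinal 2 (the canary, ss2-2 here) receives the new template; lowering the partition step by step phases the rollout across ss2-1 and ss2-0. A minimal sketch of that strategy stanza, values illustrative:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	partition := int32(2)
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			// Only pods with ordinal >= Partition are updated to the new revision.
			Partition: &partition,
		},
	}
	out, err := yaml.Marshal(&strategy)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

This also explains the "Restoring Pods to the correct revision when they are deleted" step: a deleted pod below the partition is recreated at the old revision, one at or above it at the new one.
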
• [SLOW TEST:101.606 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":38,"skipped":742,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:56:08.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-5161e06b-3150-448a-b360-ece2e46f4b5a STEP: Creating secret with name s-test-opt-upd-9e041473-86b3-4492-be00-fbfbce0d385e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5161e06b-3150-448a-b360-ece2e46f4b5a STEP: Updating secret s-test-opt-upd-9e041473-86b3-4492-be00-fbfbce0d385e STEP: Creating secret with name s-test-opt-create-ec6d3aa4-74b8-41b4-b077-1ba27e6763e2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:56:16.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-757" for this suite. 
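
The optional-updates test above flips secrets around a live volume: one is deleted, one updated, one created after the pod starts, and the mounted files must follow. The piece that makes the create case safe is the Optional flag on the projection, sketched here with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "secret-volumes",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "s-test-opt-create", // hypothetical secret name
						},
						// With Optional=true the pod mounts and runs even while
						// the secret is absent; the kubelet fills the files in
						// once the secret appears.
						Optional: &optional,
					},
				}},
			},
		},
	}
	out, _ := yaml.Marshal(&vol)
	fmt.Print(string(out))
}
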
• [SLOW TEST:8.314 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":761,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:56:16.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0526 23:56:17.337670 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 23:56:17.337: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:56:17.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1529" for this suite. 
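
This orphan-RS test mirrors the RC case earlier in the run, one level up the ownership chain: deleting a Deployment with PropagationPolicy=Orphan must leave its ReplicaSet behind. A sketch of the delete, with kubeconfig path and names assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Orphan propagation strips the owner reference instead of cascading,
	// so the garbage collector must not touch the ReplicaSet afterwards.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments("default").Delete(
		context.TODO(), "simpletest-deployment", // hypothetical name
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; ReplicaSet orphaned")
}
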
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":40,"skipped":798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:56:17.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 26 23:56:17.436: INFO: Waiting up to 5m0s for pod "var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9" in namespace "var-expansion-3980" to be "Succeeded or Failed" May 26 23:56:17.468: INFO: Pod "var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.738588ms May 26 23:56:20.916: INFO: Pod "var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.480487146s May 26 23:56:23.136: INFO: Pod "var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.699824087s May 26 23:56:25.213: INFO: Pod "var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.776922154s STEP: Saw pod success May 26 23:56:25.213: INFO: Pod "var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9" satisfied condition "Succeeded or Failed" May 26 23:56:25.215: INFO: Trying to get logs from node latest-worker2 pod var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9 container dapi-container: STEP: delete the pod May 26 23:56:25.453: INFO: Waiting for pod var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9 to disappear May 26 23:56:25.776: INFO: Pod var-expansion-1413e4c8-cc6b-4f4e-aeb9-4f46fc2e42f9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:56:25.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3980" for this suite. 
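
The variable-expansion test above relies on the kubelet substituting $(VAR) references in a container's args with env var values before exec. A sketch of the kind of pod involved; the variable name, value, and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				// $(TEST_VAR) is expanded by the kubelet, not the shell, so the
				// container simply echoes the literal value "test-value".
				Args: []string{"echo $(TEST_VAR)"},
				Env: []corev1.EnvVar{{
					Name:  "TEST_VAR",
					Value: "test-value",
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(&pod)
	fmt.Print(string(out))
}

The test then reads the container log and asserts the expanded value appears, which is why pod success is the gating condition in the output above.
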
• [SLOW TEST:8.720 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":864,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:56:26.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 26 23:56:26.427: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54ee1c1f-74ba-4d18-8c37-0ddcae45ca38" in namespace "downward-api-849" to be "Succeeded or Failed" May 26 23:56:26.447: INFO: Pod "downwardapi-volume-54ee1c1f-74ba-4d18-8c37-0ddcae45ca38": Phase="Pending", Reason="", readiness=false. Elapsed: 19.836575ms May 26 23:56:28.471: INFO: Pod "downwardapi-volume-54ee1c1f-74ba-4d18-8c37-0ddcae45ca38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044297724s May 26 23:56:30.475: INFO: Pod "downwardapi-volume-54ee1c1f-74ba-4d18-8c37-0ddcae45ca38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048246885s STEP: Saw pod success May 26 23:56:30.475: INFO: Pod "downwardapi-volume-54ee1c1f-74ba-4d18-8c37-0ddcae45ca38" satisfied condition "Succeeded or Failed" May 26 23:56:30.478: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-54ee1c1f-74ba-4d18-8c37-0ddcae45ca38 container client-container: STEP: delete the pod May 26 23:56:30.641: INFO: Waiting for pod downwardapi-volume-54ee1c1f-74ba-4d18-8c37-0ddcae45ca38 to disappear May 26 23:56:30.646: INFO: Pod downwardapi-volume-54ee1c1f-74ba-4d18-8c37-0ddcae45ca38 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:56:30.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-849" for this suite. 
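
The downward API volume test above mounts a file whose content is the container's own memory request, via a resourceFieldRef. A sketch under assumptions (the request size, paths, and image are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// Print the projected request so the test can assert on it.
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(&pod)
	fmt.Print(string(out))
}
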
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":42,"skipped":881,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:56:30.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 23:56:30.744: INFO: Creating deployment "webserver-deployment" May 26 23:56:30.750: INFO: Waiting for observed generation 1 May 26 23:56:32.872: INFO: Waiting for all required pods to come up May 26 23:56:33.023: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 26 23:56:43.038: INFO: Waiting for deployment "webserver-deployment" to complete May 26 23:56:43.043: INFO: Updating deployment "webserver-deployment" with a non-existent image May 26 23:56:43.048: INFO: Updating deployment webserver-deployment May 26 23:56:43.049: INFO: Waiting for observed generation 2 May 26 23:56:45.303: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 26 23:56:45.306: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 26 23:56:45.308: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 26 23:56:45.315: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 26 23:56:45.315: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 26 23:56:45.316: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 26 23:56:45.321: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 26 23:56:45.321: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 26 23:56:45.326: INFO: Updating deployment webserver-deployment May 26 23:56:45.326: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 26 23:56:45.622: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 26 23:56:46.185: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 26 23:56:49.460: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1799 /apis/apps/v1/namespaces/deployment-1799/deployments/webserver-deployment 38a4e310-d2a1-4d17-8dce-07dfe90df1b3 7940386 3 2020-05-26 23:56:30 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-26 23:56:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038071f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-26 23:56:45 +0000 UTC,LastTransitionTime:2020-05-26 23:56:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-26 23:56:46 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 26 23:56:53.197: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-1799 /apis/apps/v1/namespaces/deployment-1799/replicasets/webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 7940373 3 2020-05-26 23:56:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] 
[{apps/v1 Deployment webserver-deployment 38a4e310-d2a1-4d17-8dce-07dfe90df1b3 0xc003807667 0xc003807668}] [] [{kube-controller-manager Update apps/v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38a4e310-d2a1-4d17-8dce-07dfe90df1b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038076e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 23:56:53.197: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 26 23:56:53.197: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-1799 /apis/apps/v1/namespaces/deployment-1799/replicasets/webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 7940383 3 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 38a4e310-d2a1-4d17-8dce-07dfe90df1b3 0xc003807747 0xc003807748}] [] [{kube-controller-manager Update apps/v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38a4e310-d2a1-4d17-8dce-07dfe90df1b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038077b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 26 23:56:54.131: INFO: Pod "webserver-deployment-6676bcd6d4-4hmwm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4hmwm webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-4hmwm c474ced1-0fea-4223-bc98-e68e492bcb40 7940285 0 2020-05-26 23:56:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc003807cf7 0xc003807cf8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.131: INFO: Pod "webserver-deployment-6676bcd6d4-4xfjs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4xfjs webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-4xfjs 90ee917c-59b7-4e08-8147-472c1e6c2313 7940425 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc003807ea7 0xc003807ea8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.132: INFO: Pod "webserver-deployment-6676bcd6d4-5nw2z" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5nw2z webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-5nw2z b32158e0-4fa9-4ef5-8ca6-c5258d06e2b7 7940431 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc0023780c7 0xc0023780c8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.132: INFO: Pod "webserver-deployment-6676bcd6d4-6gj7m" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6gj7m webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-6gj7m 438c9d8a-3618-42d5-8046-da6bfce091fe 7940433 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc002378507 0xc002378508}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.132: INFO: Pod "webserver-deployment-6676bcd6d4-75qqm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-75qqm webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-75qqm 0a3ac465-ea4a-4a44-8601-7582161665be 7940380 0 2020-05-26 23:56:45 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc0023787c7 0xc0023787c8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.132: INFO: Pod "webserver-deployment-6676bcd6d4-b7qpz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b7qpz webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-b7qpz 49184932-3e2f-4707-aec3-8593eb187329 7940398 0 2020-05-26 23:56:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc002378987 0xc002378988}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.75\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.75,StartTime:2020-05-26 23:56:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.132: INFO: Pod "webserver-deployment-6676bcd6d4-bp4gw" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bp4gw webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-bp4gw 0da17c3f-d75c-416f-80a2-4d73ce896ef4 7940439 0 2020-05-26 23:56:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc002378b67 0xc002378b68}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.90\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.90,StartTime:2020-05-26 23:56:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.133: INFO: Pod "webserver-deployment-6676bcd6d4-hhsl8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hhsl8 webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-hhsl8 bc133dd5-6717-467b-a1ad-7c7d92a59211 7940389 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc002378d47 0xc002378d48}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.133: INFO: Pod "webserver-deployment-6676bcd6d4-kr2wn" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kr2wn webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-kr2wn 5597c1d9-2452-4fba-a303-5e251debe0d9 7940278 0 2020-05-26 23:56:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc002378ef7 0xc002378ef8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.133: INFO: Pod "webserver-deployment-6676bcd6d4-l7vrv" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l7vrv webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-l7vrv dbe21321-f1be-41a7-bc65-b49fccb881c4 7940440 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc0023790a7 0xc0023790a8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.133: INFO: Pod "webserver-deployment-6676bcd6d4-nrntf" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nrntf webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-nrntf bed996c6-8a5c-4ef5-a3fb-5d48d2c34f95 7940403 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc002379277 0xc002379278}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.133: INFO: Pod "webserver-deployment-6676bcd6d4-qfxv8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qfxv8 webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-qfxv8 5c9cd9e4-50ff-455a-bc49-c98eaa17ef22 7940427 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc002379427 0xc002379428}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.134: INFO: Pod "webserver-deployment-6676bcd6d4-tqpkx" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tqpkx webserver-deployment-6676bcd6d4- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-6676bcd6d4-tqpkx 24bed8bf-1724-455f-9a6f-a75cec06f014 7940284 0 2020-05-26 23:56:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2 0xc0023795d7 0xc0023795d8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1f1a3c7-bc91-4350-a7fa-3e0832e4bcd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.134: INFO: Pod "webserver-deployment-84855cf797-4dr9c" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4dr9c webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-4dr9c 096c8ec8-6470-4d89-8f68-c21f6c01fdb1 7940391 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002379787 0xc002379788}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.134: INFO: Pod "webserver-deployment-84855cf797-4zfcj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4zfcj webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-4zfcj 0f266519-b035-462f-9600-153c6250bb0c 7940405 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002379917 0xc002379918}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.134: INFO: Pod "webserver-deployment-84855cf797-75jwp" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-75jwp webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-75jwp 4ec0f3fb-2827-42fe-a402-51c1786afdc0 7940410 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002379aa7 0xc002379aa8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.134: INFO: Pod "webserver-deployment-84855cf797-84hw6" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-84hw6 webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-84hw6 97484ecd-c70c-45d3-91ab-23e90b975526 7940197 0 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002379c37 0xc002379c38}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.86\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 
23:56:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.86,StartTime:2020-05-26 23:56:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:56:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7f9eec66f9ef08027dffbaee94a46b271f78ae1e57683a14f7a315cb57908f23,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.134: INFO: Pod "webserver-deployment-84855cf797-cskmx" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cskmx webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-cskmx 65a25bc8-fcf2-42fc-b9cf-f5491fb49186 7940217 0 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002208007 0xc002208008}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 
23:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.74,StartTime:2020-05-26 23:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:56:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://076142b31aac5c07f5e346e9f6ff0c1b2249bf780531d8b133c8bc8d90aab177,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.134: INFO: Pod "webserver-deployment-84855cf797-czb48" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-czb48 webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-czb48 1658b424-5da8-4383-bbca-e319e05a26e9 7940397 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc0022081b7 0xc0022081b8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.135: INFO: Pod "webserver-deployment-84855cf797-dfk4l" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dfk4l webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-dfk4l 5a49e756-2212-470a-9274-9845fb826805 7940387 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002208347 0xc002208348}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.135: INFO: Pod "webserver-deployment-84855cf797-gsgdq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gsgdq webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-gsgdq 0c1f1aad-f0bb-4f76-99ea-ace9532c4317 7940418 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc0022084d7 0xc0022084d8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.135: INFO: Pod "webserver-deployment-84855cf797-hdvqq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hdvqq webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-hdvqq 83f7c0d2-5a25-4205-b76d-33bcd8dd6507 7940215 0 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002208667 0xc002208668}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.72\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 
23:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.72,StartTime:2020-05-26 23:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:56:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://30872b8b3c3a9c6a5b1e27bd8c18523c561865d8c204929a25c65df0271e96be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.135: INFO: Pod "webserver-deployment-84855cf797-hgwg4" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hgwg4 webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-hgwg4 d453b694-128a-4940-a833-84ecd4b8b27f 7940184 0 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002208817 0xc002208818}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.85\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 
23:56:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.85,StartTime:2020-05-26 23:56:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:56:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7daeae06b9a7c89b7bbdbf4fbaa3ea1c2a294f84dcbe61efe3214c2b6b4b8a0f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.135: INFO: Pod "webserver-deployment-84855cf797-lh7qx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lh7qx webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-lh7qx b243a06d-a9af-491f-8fa9-cac704d0ad04 7940394 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc0022089c7 0xc0022089c8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.135: INFO: Pod "webserver-deployment-84855cf797-lpm6z" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lpm6z webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-lpm6z 8f617cda-3cad-4d12-971b-7161c3fd99af 7940212 0 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002208b57 0xc002208b58}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 
23:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.73,StartTime:2020-05-26 23:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:56:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a5ce34cf18b33aaa3662d5a79ba804e6ee1079940778e3289c73266c35f5d032,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.136: INFO: Pod "webserver-deployment-84855cf797-m5vs4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m5vs4 webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-m5vs4 9c5700fe-162b-4498-9bf1-d21142c8ca83 7940409 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002208d27 0xc002208d28}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.136: INFO: Pod "webserver-deployment-84855cf797-nv2jv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nv2jv webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-nv2jv beaa35b3-5485-4094-9b8a-40bdf5e27117 7940375 0 2020-05-26 23:56:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002208ed7 0xc002208ed8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.136: INFO: Pod "webserver-deployment-84855cf797-sns6t" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-sns6t webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-sns6t f563baaf-da9a-4985-9b30-64c9bdd1c61e 7940169 0 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002209087 0xc002209088}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 
23:56:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.84,StartTime:2020-05-26 23:56:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:56:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0bd25628d3325ec2b65063f367410338010bbb674dc6d7c846ec335367629b15,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.136: INFO: Pod "webserver-deployment-84855cf797-tjk7c" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tjk7c webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-tjk7c 8fc73033-3045-47fc-b23f-aee76cc5dd52 7940384 0 2020-05-26 23:56:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002209257 0xc002209258}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.136: INFO: Pod "webserver-deployment-84855cf797-vfjb5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vfjb5 webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-vfjb5 d2b99337-9851-45d1-8bce-188eb07e5635 7940421 0 2020-05-26 23:56:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002209437 0xc002209438}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.136: INFO: Pod "webserver-deployment-84855cf797-vm2qj" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vm2qj webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-vm2qj 30a27079-473e-49da-bb7c-d70a6babcb4e 7940221 0 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc0022095c7 0xc0022095c8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.89\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 
23:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.89,StartTime:2020-05-26 23:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:56:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://24e19fe840041d39c426ec66ce91d0183b8e68cdf743947af5d20a180032ddf5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.137: INFO: Pod "webserver-deployment-84855cf797-wd88w" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wd88w webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-wd88w 20ad4f79-c1e2-4a82-90d9-00417fb7adc9 7940365 0 2020-05-26 23:56:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc0022097a7 0xc0022097a8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-26 23:56:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:56:54.137: INFO: Pod "webserver-deployment-84855cf797-wwtdr" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wwtdr webserver-deployment-84855cf797- deployment-1799 /api/v1/namespaces/deployment-1799/pods/webserver-deployment-84855cf797-wwtdr b886fd12-d86b-4761-b745-e91b5c19f16d 7940185 0 2020-05-26 23:56:30 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e80f6757-b4fc-4a55-b42c-2c8e4654c2d5 0xc002209cd7 0xc002209cd8}] [] [{kube-controller-manager Update v1 2020-05-26 23:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e80f6757-b4fc-4a55-b42c-2c8e4654c2d5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-26 23:56:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8dw4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8dw4d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8dw4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 
23:56:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:56:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.71,StartTime:2020-05-26 23:56:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:56:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aa56868ff83bbdc1df1422f70a8843ddb5e7f6ee3362c27f9cf1b776b202b638,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:56:54.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1799" for this suite. • [SLOW TEST:24.110 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":43,"skipped":900,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:56:54.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 26 23:56:55.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 26 23:57:15.347: INFO: stderr: "" May 26 23:57:15.347: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:57:15.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1542" for this suite. • [SLOW TEST:21.291 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1057 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":44,"skipped":908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:57:16.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4647 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4647;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4647 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4647;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4647.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4647.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4647.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4647.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4647.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4647.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4647.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4647.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4647.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4647.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4647.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 131.112.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.112.131_udp@PTR;check="$$(dig +tcp +noall +answer +search 131.112.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.112.131_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4647 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4647;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4647 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4647;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4647.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4647.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4647.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4647.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4647.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4647.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4647.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4647.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4647.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4647.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4647.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4647.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 131.112.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.112.131_udp@PTR;check="$$(dig +tcp +noall +answer +search 131.112.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.112.131_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 23:57:31.013: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.040: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.049: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.054: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.071: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.150: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.230: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.323: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.329: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.341: INFO: Unable to read jessie_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.368: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.381: INFO: Unable to read jessie_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.412: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.423: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.429: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:31.482: INFO: Lookups using dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4647 wheezy_tcp@dns-test-service.dns-4647 wheezy_udp@dns-test-service.dns-4647.svc wheezy_tcp@dns-test-service.dns-4647.svc wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4647 jessie_tcp@dns-test-service.dns-4647 jessie_udp@dns-test-service.dns-4647.svc jessie_tcp@dns-test-service.dns-4647.svc jessie_udp@_http._tcp.dns-test-service.dns-4647.svc jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc] May 26 23:57:36.488: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.492: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.495: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.499: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.502: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.505: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.508: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.511: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.533: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.536: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.539: INFO: Unable to read jessie_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.541: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.544: INFO: Unable to read jessie_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.547: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.550: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.554: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:36.575: INFO: Lookups using dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4647 wheezy_tcp@dns-test-service.dns-4647 wheezy_udp@dns-test-service.dns-4647.svc wheezy_tcp@dns-test-service.dns-4647.svc wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4647 jessie_tcp@dns-test-service.dns-4647 jessie_udp@dns-test-service.dns-4647.svc jessie_tcp@dns-test-service.dns-4647.svc jessie_udp@_http._tcp.dns-test-service.dns-4647.svc jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc] May 26 23:57:41.488: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.491: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.495: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.498: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647 from pod 
dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.501: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.504: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.508: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.511: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.534: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.538: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.541: INFO: Unable to read jessie_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.544: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.548: INFO: Unable to read jessie_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.551: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.555: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.558: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:41.579: INFO: Lookups using dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4647 wheezy_tcp@dns-test-service.dns-4647 wheezy_udp@dns-test-service.dns-4647.svc wheezy_tcp@dns-test-service.dns-4647.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4647 jessie_tcp@dns-test-service.dns-4647 jessie_udp@dns-test-service.dns-4647.svc jessie_tcp@dns-test-service.dns-4647.svc jessie_udp@_http._tcp.dns-test-service.dns-4647.svc jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc] May 26 23:57:46.488: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.492: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.496: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.500: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.505: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.508: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.512: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.515: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.538: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.541: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.544: INFO: Unable to read jessie_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.548: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.550: INFO: Unable to read jessie_udp@dns-test-service.dns-4647.svc from pod 
dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.553: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.556: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.559: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:46.580: INFO: Lookups using dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4647 wheezy_tcp@dns-test-service.dns-4647 wheezy_udp@dns-test-service.dns-4647.svc wheezy_tcp@dns-test-service.dns-4647.svc wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4647 jessie_tcp@dns-test-service.dns-4647 jessie_udp@dns-test-service.dns-4647.svc jessie_tcp@dns-test-service.dns-4647.svc jessie_udp@_http._tcp.dns-test-service.dns-4647.svc jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc] May 26 23:57:51.487: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.490: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.492: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.495: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.498: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.500: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.503: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.506: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod 
dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.533: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.535: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.537: INFO: Unable to read jessie_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.539: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.542: INFO: Unable to read jessie_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.544: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.547: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.549: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:51.564: INFO: Lookups using dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4647 wheezy_tcp@dns-test-service.dns-4647 wheezy_udp@dns-test-service.dns-4647.svc wheezy_tcp@dns-test-service.dns-4647.svc wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4647 jessie_tcp@dns-test-service.dns-4647 jessie_udp@dns-test-service.dns-4647.svc jessie_tcp@dns-test-service.dns-4647.svc jessie_udp@_http._tcp.dns-test-service.dns-4647.svc jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc] May 26 23:57:56.488: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.532: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.536: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the 
server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.551: INFO: Unable to read wheezy_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.555: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.558: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.561: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.584: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.587: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.590: INFO: Unable to read jessie_udp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.593: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647 from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.596: INFO: Unable to read jessie_udp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.599: INFO: Unable to read jessie_tcp@dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc from pod dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6: the server could not find the requested resource (get pods dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6) May 26 23:57:56.625: INFO: Lookups using dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4647 wheezy_tcp@dns-test-service.dns-4647 wheezy_udp@dns-test-service.dns-4647.svc wheezy_tcp@dns-test-service.dns-4647.svc wheezy_udp@_http._tcp.dns-test-service.dns-4647.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4647.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4647 jessie_tcp@dns-test-service.dns-4647 jessie_udp@dns-test-service.dns-4647.svc jessie_tcp@dns-test-service.dns-4647.svc jessie_udp@_http._tcp.dns-test-service.dns-4647.svc jessie_tcp@_http._tcp.dns-test-service.dns-4647.svc] May 26 23:58:02.534: INFO: DNS probes using dns-4647/dns-test-2a916986-2c8f-4776-bb2d-b45601a1a2a6 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:03.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4647" for this suite. • [SLOW TEST:47.457 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":45,"skipped":965,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:03.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 23:58:04.641: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 23:58:07.292: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134284, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134284, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134285, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134284, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 23:58:09.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134284, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134284, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134285, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134284, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 23:58:12.334: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:13.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3451" for this suite. STEP: Destroying namespace "webhook-3451-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.731 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":46,"skipped":965,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:13.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 26 23:58:13.375: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
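Registering a sample API server with the aggregator, as the step above begins to do, comes down to creating an APIService object that points at a Service fronting the custom apiserver. A minimal sketch using the kube-aggregator client follows; the group/version (wardle.example.com/v1alpha1) mirrors the upstream sample-apiserver convention, and the service name is a placeholder rather than a value taken from this run.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := aggregatorclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	port := int32(443)
	apiService := &apiregistrationv1.APIService{
		// The conventional name is "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:                 "wardle.example.com",
			Version:               "v1alpha1",
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
			InsecureSkipTLSVerify: true, // sketch only; a real setup supplies Spec.CABundle instead
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-3928", // namespace from this run; service name is assumed
				Name:      "sample-api",
				Port:      &port,
			},
		},
	}
	if _, err := client.ApiregistrationV1().APIServices().
		Create(context.TODO(), apiService, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}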
May 26 23:58:13.913: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 26 23:58:16.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134294, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 23:58:18.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134294, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 23:58:20.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134294, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134293, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 23:58:22.855: INFO: Waited 625.232687ms for the sample-apiserver to be ready to handle requests. 
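[Editor's note] Once the sample-apiserver reports ready, the aggregated group is served through the main apiserver and can be inspected like any built-in API. A sketch; the group and resource names (wardle.example.com, flunders) are assumed from the upstream sample-apiserver, not taken from this log:

# Confirm the APIService registration and its Available condition
kubectl get apiservices v1alpha1.wardle.example.com
# The aggregated resource then behaves like a native type
kubectl get flunders.wardle.example.com --all-namespaces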
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:23.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3928" for this suite. • [SLOW TEST:10.654 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":47,"skipped":968,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:23.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-5cd132d9-bca4-4862-97ab-6009ab5cfe60 STEP: Creating a pod to test consume secrets May 26 23:58:24.274: INFO: Waiting up to 5m0s for pod "pod-secrets-1ce21240-f91f-468e-8899-4a80a0445b51" in namespace "secrets-6929" to be "Succeeded or Failed" May 26 23:58:24.651: INFO: Pod "pod-secrets-1ce21240-f91f-468e-8899-4a80a0445b51": Phase="Pending", Reason="", readiness=false. Elapsed: 377.220872ms May 26 23:58:26.656: INFO: Pod "pod-secrets-1ce21240-f91f-468e-8899-4a80a0445b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381900523s May 26 23:58:28.660: INFO: Pod "pod-secrets-1ce21240-f91f-468e-8899-4a80a0445b51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.386201031s STEP: Saw pod success May 26 23:58:28.660: INFO: Pod "pod-secrets-1ce21240-f91f-468e-8899-4a80a0445b51" satisfied condition "Succeeded or Failed" May 26 23:58:28.664: INFO: Trying to get logs from node latest-worker pod pod-secrets-1ce21240-f91f-468e-8899-4a80a0445b51 container secret-volume-test: STEP: delete the pod May 26 23:58:28.747: INFO: Waiting for pod pod-secrets-1ce21240-f91f-468e-8899-4a80a0445b51 to disappear May 26 23:58:28.784: INFO: Pod pod-secrets-1ce21240-f91f-468e-8899-4a80a0445b51 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:28.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6929" for this suite. 
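[Editor's note] The "mappings" in the secrets test above refer to the items field of a secret volume, which remaps secret keys onto chosen file paths inside the mount. A minimal sketch with hypothetical names:

# Create a secret and project one key to a custom path inside the volume
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1            # secret key
        path: new-path-data-1  # file name it is mapped to under mountPath
EOF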
• [SLOW TEST:5.043 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":48,"skipped":1008,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:28.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 26 23:58:29.197: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 26 23:58:34.200: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 26 23:58:34.200: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 26 23:58:34.239: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-792 /apis/apps/v1/namespaces/deployment-792/deployments/test-cleanup-deployment ddecf2b7-cb87-4a6c-a2a5-91858dd8d0e6 7941304 1 2020-05-26 23:58:34 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-26 23:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}] [] Always 0xc002a028d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 26 23:58:34.336: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-792 /apis/apps/v1/namespaces/deployment-792/replicasets/test-cleanup-deployment-6688745694 4a0a8aa3-782d-45f1-bd7d-284963e8d331 7941306 1 2020-05-26 23:58:34 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment ddecf2b7-cb87-4a6c-a2a5-91858dd8d0e6 0xc0007c08c7 0xc0007c08c8}] [] [{kube-controller-manager Update apps/v1 2020-05-26 23:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ddecf2b7-cb87-4a6c-a2a5-91858dd8d0e6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0007c0c78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 
26 23:58:34.336: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 26 23:58:34.336: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-792 /apis/apps/v1/namespaces/deployment-792/replicasets/test-cleanup-controller 7c7ef0d3-c9a2-4641-953c-860416db35bb 7941305 1 2020-05-26 23:58:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment ddecf2b7-cb87-4a6c-a2a5-91858dd8d0e6 0xc0007c06b7 0xc0007c06b8}] [] [{e2e.test Update apps/v1 2020-05-26 23:58:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-26 23:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"ddecf2b7-cb87-4a6c-a2a5-91858dd8d0e6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0007c0838 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 26 23:58:34.383: INFO: Pod "test-cleanup-controller-drfcs" is available: &Pod{ObjectMeta:{test-cleanup-controller-drfcs test-cleanup-controller- deployment-792 /api/v1/namespaces/deployment-792/pods/test-cleanup-controller-drfcs 746055f0-8163-4bf2-8259-d00052f3738b 7941289 0 2020-05-26 23:58:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 7c7ef0d3-c9a2-4641-953c-860416db35bb 0xc0022c6757 0xc0022c6758}] [] [{kube-controller-manager Update v1 2020-05-26 23:58:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c7ef0d3-c9a2-4641-953c-860416db35bb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 
2020-05-26 23:58:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.104\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lfgp6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lfgp6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lfgp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:58:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:58:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:58:32 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:58:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.104,StartTime:2020-05-26 23:58:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 23:58:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://896a5410c8e0b562292085d1a0cac54a6713205befde41dc47ea844e2bee1f8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 23:58:34.384: INFO: Pod "test-cleanup-deployment-6688745694-n6xkd" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-n6xkd test-cleanup-deployment-6688745694- deployment-792 /api/v1/namespaces/deployment-792/pods/test-cleanup-deployment-6688745694-n6xkd 80f2e0a8-18a2-44ca-a01a-947e99211155 7941311 0 2020-05-26 23:58:34 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 4a0a8aa3-782d-45f1-bd7d-284963e8d331 0xc0022c6b87 0xc0022c6b88}] [] [{kube-controller-manager Update v1 2020-05-26 23:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a0a8aa3-782d-45f1-bd7d-284963e8d331\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lfgp6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lfgp6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lfgp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil
,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 23:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:34.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-792" for this suite. • [SLOW TEST:5.550 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":49,"skipped":1023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:34.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:34.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8761" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":50,"skipped":1056,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:34.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 26 23:58:34.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3127' May 26 23:58:37.569: INFO: stderr: "" May 26 23:58:37.569: INFO: stdout: "pod/pause created\n" May 26 23:58:37.569: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 26 23:58:37.569: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3127" to be "running and ready" May 26 23:58:37.658: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 89.017806ms May 26 23:58:39.724: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155040559s May 26 23:58:41.727: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158326331s May 26 23:58:43.731: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.161986193s May 26 23:58:43.731: INFO: Pod "pause" satisfied condition "running and ready" May 26 23:58:43.731: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 26 23:58:43.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3127' May 26 23:58:43.836: INFO: stderr: "" May 26 23:58:43.836: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 26 23:58:43.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3127' May 26 23:58:43.940: INFO: stderr: "" May 26 23:58:43.940: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod May 26 23:58:43.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3127' May 26 23:58:44.089: INFO: stderr: "" May 26 23:58:44.089: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 26 23:58:44.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3127' May 26 23:58:44.199: INFO: stderr: "" May 26 23:58:44.199: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 26 23:58:44.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3127' May 26 23:58:44.332: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 23:58:44.332: INFO: stdout: "pod \"pause\" force deleted\n" May 26 23:58:44.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3127' May 26 23:58:44.431: INFO: stderr: "No resources found in kubectl-3127 namespace.\n" May 26 23:58:44.431: INFO: stdout: "" May 26 23:58:44.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3127 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 26 23:58:44.663: INFO: stderr: "" May 26 23:58:44.663: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:44.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3127" for this suite. 
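[Editor's note] For reference, the label operations above reduce to four kubectl forms; the first three mirror the commands in this log, and the --overwrite variant is added for completeness:

kubectl label pod pause testing-label=testing-label-value   # add a label
kubectl get pod pause -L testing-label                      # show it as an extra column
kubectl label pod pause testing-label-                      # trailing dash removes it
kubectl label pod pause testing-label=other --overwrite     # changing an existing value requires --overwrite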
• [SLOW TEST:9.936 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":51,"skipped":1067,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:44.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:44.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2844" for this suite. STEP: Destroying namespace "nspatchtest-8b35055c-ece2-4bf5-91ad-d2e154a11cff-5747" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":52,"skipped":1080,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:44.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 26 23:58:45.174: INFO: Waiting up to 5m0s for pod "pod-d7163035-8bfe-42ad-8a06-983e4c627ebf" in namespace "emptydir-4770" to be "Succeeded or Failed" May 26 23:58:45.183: INFO: Pod "pod-d7163035-8bfe-42ad-8a06-983e4c627ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.714519ms May 26 23:58:47.187: INFO: Pod "pod-d7163035-8bfe-42ad-8a06-983e4c627ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013531313s May 26 23:58:49.214: INFO: Pod "pod-d7163035-8bfe-42ad-8a06-983e4c627ebf": Phase="Running", Reason="", readiness=true. Elapsed: 4.04022566s May 26 23:58:51.225: INFO: Pod "pod-d7163035-8bfe-42ad-8a06-983e4c627ebf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.051429707s STEP: Saw pod success May 26 23:58:51.225: INFO: Pod "pod-d7163035-8bfe-42ad-8a06-983e4c627ebf" satisfied condition "Succeeded or Failed" May 26 23:58:51.227: INFO: Trying to get logs from node latest-worker2 pod pod-d7163035-8bfe-42ad-8a06-983e4c627ebf container test-container: STEP: delete the pod May 26 23:58:51.272: INFO: Waiting for pod pod-d7163035-8bfe-42ad-8a06-983e4c627ebf to disappear May 26 23:58:51.286: INFO: Pod pod-d7163035-8bfe-42ad-8a06-983e4c627ebf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:58:51.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4770" for this suite. • [SLOW TEST:6.359 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":1083,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:58:51.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-c0774118-4603-41ed-80a4-9e4c77d055d5 May 26 23:58:51.418: INFO: Pod name my-hostname-basic-c0774118-4603-41ed-80a4-9e4c77d055d5: Found 0 pods out of 1 May 26 23:58:56.421: INFO: Pod name my-hostname-basic-c0774118-4603-41ed-80a4-9e4c77d055d5: Found 1 pods out of 1 May 26 23:58:56.421: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c0774118-4603-41ed-80a4-9e4c77d055d5" are running May 26 23:58:56.424: INFO: Pod "my-hostname-basic-c0774118-4603-41ed-80a4-9e4c77d055d5-vgjqk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 23:58:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 23:58:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 23:58:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 23:58:51 +0000 UTC Reason: Message:}]) May 26 23:58:56.424: INFO: Trying to dial the pod May 26 23:59:01.437: INFO: Controller 
my-hostname-basic-c0774118-4603-41ed-80a4-9e4c77d055d5: Got expected result from replica 1 [my-hostname-basic-c0774118-4603-41ed-80a4-9e4c77d055d5-vgjqk]: "my-hostname-basic-c0774118-4603-41ed-80a4-9e4c77d055d5-vgjqk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:59:01.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3116" for this suite. • [SLOW TEST:10.151 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":54,"skipped":1088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:59:01.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 26 23:59:01.506: INFO: Waiting up to 5m0s for pod "client-containers-387ac074-2fba-4848-affb-1cf2d165d81f" in namespace "containers-8023" to be "Succeeded or Failed" May 26 23:59:01.525: INFO: Pod "client-containers-387ac074-2fba-4848-affb-1cf2d165d81f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.104093ms May 26 23:59:03.551: INFO: Pod "client-containers-387ac074-2fba-4848-affb-1cf2d165d81f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044180996s May 26 23:59:05.555: INFO: Pod "client-containers-387ac074-2fba-4848-affb-1cf2d165d81f": Phase="Running", Reason="", readiness=true. Elapsed: 4.048315827s May 26 23:59:07.562: INFO: Pod "client-containers-387ac074-2fba-4848-affb-1cf2d165d81f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.055821123s STEP: Saw pod success May 26 23:59:07.562: INFO: Pod "client-containers-387ac074-2fba-4848-affb-1cf2d165d81f" satisfied condition "Succeeded or Failed" May 26 23:59:07.565: INFO: Trying to get logs from node latest-worker pod client-containers-387ac074-2fba-4848-affb-1cf2d165d81f container test-container: STEP: delete the pod May 26 23:59:07.654: INFO: Waiting for pod client-containers-387ac074-2fba-4848-affb-1cf2d165d81f to disappear May 26 23:59:07.676: INFO: Pod client-containers-387ac074-2fba-4848-affb-1cf2d165d81f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:59:07.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8023" for this suite. • [SLOW TEST:6.238 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":55,"skipped":1149,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:59:07.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 23:59:08.488: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 23:59:10.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134348, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134348, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134348, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134348, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 
26 23:59:12.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134348, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134348, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134348, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134348, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 23:59:15.536: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:59:15.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2792" for this suite. STEP: Destroying namespace "webhook-2792-markers" for this suite. 
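[Editor's note] The update/patch steps above toggle which operations the webhook's rules match. A sketch using a JSON patch from the CLI; the configuration name is hypothetical:

# Drop CREATE so newly created configMaps are not mutated
kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
# Re-include CREATE so mutation applies to new objects again
kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'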
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.197 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":56,"skipped":1168,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:59:15.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-1878/secret-test-ee60a74f-67aa-424f-a2b1-a9bc6485372d STEP: Creating a pod to test consume secrets May 26 23:59:16.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-963649b9-c484-4292-bfcf-4296fc74ca90" in namespace "secrets-1878" to be "Succeeded or Failed" May 26 23:59:16.083: INFO: Pod "pod-configmaps-963649b9-c484-4292-bfcf-4296fc74ca90": Phase="Pending", Reason="", readiness=false. Elapsed: 71.838184ms May 26 23:59:18.102: INFO: Pod "pod-configmaps-963649b9-c484-4292-bfcf-4296fc74ca90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091484844s May 26 23:59:20.107: INFO: Pod "pod-configmaps-963649b9-c484-4292-bfcf-4296fc74ca90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095943647s STEP: Saw pod success May 26 23:59:20.107: INFO: Pod "pod-configmaps-963649b9-c484-4292-bfcf-4296fc74ca90" satisfied condition "Succeeded or Failed" May 26 23:59:20.110: INFO: Trying to get logs from node latest-worker pod pod-configmaps-963649b9-c484-4292-bfcf-4296fc74ca90 container env-test: STEP: delete the pod May 26 23:59:20.162: INFO: Waiting for pod pod-configmaps-963649b9-c484-4292-bfcf-4296fc74ca90 to disappear May 26 23:59:20.170: INFO: Pod pod-configmaps-963649b9-c484-4292-bfcf-4296fc74ca90 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:59:20.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1878" for this suite. 
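[Editor's note] Consuming a secret through the environment, as this test does, hangs a secretKeyRef off the container's env. A minimal sketch with hypothetical names:

kubectl create secret generic env-demo-secret --from-literal=key-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_KEY"]
    env:
    - name: SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: env-demo-secret
          key: key-1
EOF
kubectl logs secret-env-demo   # prints value-1 once the pod has Succeeded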
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":57,"skipped":1183,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:59:20.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 26 23:59:20.240: INFO: Waiting up to 5m0s for pod "pod-da6f8826-26ce-42eb-ad0b-90981748c5ac" in namespace "emptydir-9434" to be "Succeeded or Failed" May 26 23:59:20.251: INFO: Pod "pod-da6f8826-26ce-42eb-ad0b-90981748c5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 10.716486ms May 26 23:59:22.255: INFO: Pod "pod-da6f8826-26ce-42eb-ad0b-90981748c5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014190997s May 26 23:59:24.268: INFO: Pod "pod-da6f8826-26ce-42eb-ad0b-90981748c5ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027499761s STEP: Saw pod success May 26 23:59:24.268: INFO: Pod "pod-da6f8826-26ce-42eb-ad0b-90981748c5ac" satisfied condition "Succeeded or Failed" May 26 23:59:24.270: INFO: Trying to get logs from node latest-worker pod pod-da6f8826-26ce-42eb-ad0b-90981748c5ac container test-container: STEP: delete the pod May 26 23:59:24.304: INFO: Waiting for pod pod-da6f8826-26ce-42eb-ad0b-90981748c5ac to disappear May 26 23:59:24.314: INFO: Pod pod-da6f8826-26ce-42eb-ad0b-90981748c5ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:59:24.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9434" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":1185,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:59:24.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-69c6 STEP: Creating a pod to test atomic-volume-subpath May 26 23:59:24.466: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-69c6" in namespace "subpath-3164" to be "Succeeded or Failed" May 26 23:59:24.482: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.132379ms May 26 23:59:26.486: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020354249s May 26 23:59:28.490: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 4.024260484s May 26 23:59:30.494: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 6.027950544s May 26 23:59:32.497: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 8.031803818s May 26 23:59:34.501: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 10.035255234s May 26 23:59:36.510: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 12.044606681s May 26 23:59:38.514: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 14.048907185s May 26 23:59:40.519: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 16.053437791s May 26 23:59:42.524: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 18.058286765s May 26 23:59:44.528: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 20.062661772s May 26 23:59:46.534: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Running", Reason="", readiness=true. Elapsed: 22.067952679s May 26 23:59:48.538: INFO: Pod "pod-subpath-test-configmap-69c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.072753197s STEP: Saw pod success May 26 23:59:48.538: INFO: Pod "pod-subpath-test-configmap-69c6" satisfied condition "Succeeded or Failed" May 26 23:59:48.542: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-69c6 container test-container-subpath-configmap-69c6: STEP: delete the pod May 26 23:59:48.583: INFO: Waiting for pod pod-subpath-test-configmap-69c6 to disappear May 26 23:59:48.621: INFO: Pod pod-subpath-test-configmap-69c6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-69c6 May 26 23:59:48.621: INFO: Deleting pod "pod-subpath-test-configmap-69c6" in namespace "subpath-3164" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 26 23:59:48.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3164" for this suite. • [SLOW TEST:24.309 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":59,"skipped":1185,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 23:59:48.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 26 23:59:48.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:48.823: INFO: Number of nodes with available pods: 0 May 26 23:59:48.823: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:49.829: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:49.832: INFO: Number of nodes with available pods: 0 May 26 23:59:49.833: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:50.976: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:51.263: INFO: Number of nodes with available pods: 0 May 26 23:59:51.264: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:51.828: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:51.832: INFO: Number of nodes with available pods: 0 May 26 23:59:51.832: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:52.827: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:52.830: INFO: Number of nodes with available pods: 2 May 26 23:59:52.830: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 26 23:59:52.869: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:52.873: INFO: Number of nodes with available pods: 1 May 26 23:59:52.873: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:53.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:53.880: INFO: Number of nodes with available pods: 1 May 26 23:59:53.880: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:54.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:54.882: INFO: Number of nodes with available pods: 1 May 26 23:59:54.882: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:55.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:55.988: INFO: Number of nodes with available pods: 1 May 26 23:59:55.988: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:56.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:56.883: INFO: Number of nodes with available pods: 1 May 26 23:59:56.883: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:57.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:57.879: INFO: Number of nodes with available pods: 1 May 26 23:59:57.879: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:58.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:58.880: INFO: Number of nodes with available pods: 1 May 26 23:59:58.880: INFO: Node latest-worker is running more than one daemon pod May 26 23:59:59.906: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 23:59:59.918: INFO: Number of nodes with available pods: 1 May 26 23:59:59.918: INFO: Node latest-worker is running more than one daemon pod May 27 00:00:00.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 00:00:00.884: INFO: Number of nodes with available pods: 1 May 27 00:00:00.884: INFO: Node latest-worker is running more than one daemon pod May 27 00:00:01.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 00:00:01.883: INFO: Number of nodes with available pods: 1 May 27 00:00:01.883: INFO: Node latest-worker is running more than one daemon pod May 27 00:00:02.880: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 00:00:02.884: INFO: Number of nodes with available pods: 1 May 27 00:00:02.884: INFO: Node latest-worker is running more than one daemon pod May 27 00:00:03.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 00:00:03.883: INFO: Number of nodes with available pods: 1 May 27 00:00:03.883: INFO: Node latest-worker is running more than one daemon pod May 27 00:00:04.905: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 00:00:04.940: INFO: Number of nodes with available pods: 1 May 27 00:00:04.940: INFO: Node latest-worker is running more than one daemon pod May 27 00:00:05.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 00:00:05.882: INFO: Number of nodes with available pods: 1 May 27 00:00:05.882: INFO: Node latest-worker is running more than one daemon pod May 27 00:00:06.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 00:00:06.883: INFO: Number of nodes with available pods: 1 May 27 00:00:06.883: INFO: Node latest-worker is running more than one daemon pod May 27 00:00:07.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 00:00:07.882: INFO: Number of nodes with available pods: 2 May 27 00:00:07.882: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7926, will wait for the garbage collector to delete the pods May 27 00:00:07.952: INFO: Deleting DaemonSet.extensions daemon-set took: 14.14595ms May 27 00:00:08.352: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.288797ms May 27 00:00:15.262: INFO: Number of nodes with available pods: 0 May 27 00:00:15.262: INFO: Number of running nodes: 0, number of available pods: 0 May 27 00:00:15.265: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7926/daemonsets","resourceVersion":"7941979"},"items":null} May 27 00:00:15.268: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7926/pods","resourceVersion":"7941979"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:00:15.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7926" for this suite. 
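The repeated "can't tolerate node latest-control-plane" lines above are expected: the DaemonSet under test carries no toleration for the master NoSchedule taint, so that node is skipped when counting available pods. A sketch of the toleration that would change this, applied as a patch (the DaemonSet name matches the log; everything else is illustrative):

kubectl patch daemonset daemon-set -p '
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule'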
• [SLOW TEST:26.652 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":60,"skipped":1212,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:00:15.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 27 00:00:15.417: INFO: >>> kubeConfig: /root/.kube/config May 27 00:00:17.848: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:00:27.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7653" for this suite. 
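What "published OpenAPI" means operationally for the CRD spec above: once both CRDs' schemas are aggregated into the apiserver's OpenAPI document, the custom kinds are documented to clients the same way built-ins are. A quick hand check (the kind name "foos" is hypothetical; the test generates random ones):

kubectl explain foos
kubectl explain foos.spec
kubectl get --raw /openapi/v2 > /tmp/openapi.json   # both kinds' schemas appear here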
• [SLOW TEST:12.687 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":61,"skipped":1212,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:00:27.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:00:32.149: INFO: Waiting up to 5m0s for pod "client-envvars-adb60b0e-be4e-435d-be42-cde0108f1bd0" in namespace "pods-8005" to be "Succeeded or Failed" May 27 00:00:32.168: INFO: Pod "client-envvars-adb60b0e-be4e-435d-be42-cde0108f1bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.052173ms May 27 00:00:34.222: INFO: Pod "client-envvars-adb60b0e-be4e-435d-be42-cde0108f1bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072251773s May 27 00:00:36.226: INFO: Pod "client-envvars-adb60b0e-be4e-435d-be42-cde0108f1bd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07665066s STEP: Saw pod success May 27 00:00:36.226: INFO: Pod "client-envvars-adb60b0e-be4e-435d-be42-cde0108f1bd0" satisfied condition "Succeeded or Failed" May 27 00:00:36.229: INFO: Trying to get logs from node latest-worker2 pod client-envvars-adb60b0e-be4e-435d-be42-cde0108f1bd0 container env3cont: STEP: delete the pod May 27 00:00:36.367: INFO: Waiting for pod client-envvars-adb60b0e-be4e-435d-be42-cde0108f1bd0 to disappear May 27 00:00:36.429: INFO: Pod client-envvars-adb60b0e-be4e-435d-be42-cde0108f1bd0 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:00:36.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8005" for this suite. 
• [SLOW TEST:8.473 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":62,"skipped":1229,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:00:36.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:00:36.536: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9380831f-90eb-4431-bf7d-df1ffb4d9884" in namespace "security-context-test-6494" to be "Succeeded or Failed" May 27 00:00:36.552: INFO: Pod "alpine-nnp-false-9380831f-90eb-4431-bf7d-df1ffb4d9884": Phase="Pending", Reason="", readiness=false. Elapsed: 16.556762ms May 27 00:00:38.568: INFO: Pod "alpine-nnp-false-9380831f-90eb-4431-bf7d-df1ffb4d9884": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03263646s May 27 00:00:40.572: INFO: Pod "alpine-nnp-false-9380831f-90eb-4431-bf7d-df1ffb4d9884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036633643s May 27 00:00:40.572: INFO: Pod "alpine-nnp-false-9380831f-90eb-4431-bf7d-df1ffb4d9884" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:00:40.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6494" for this suite. 
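A sketch of the knob the Security Context spec above covers, on a hypothetical pod: allowPrivilegeEscalation: false sets the process's no_new_privs flag, which the container can verify from /proc itself.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: alpine
    image: alpine:3.7
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]   # expect: NoNewPrivs: 1
    securityContext:
      allowPrivilegeEscalation: false
EOF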
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":1251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:00:40.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 27 00:00:40.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1336' May 27 00:00:40.874: INFO: stderr: "" May 27 00:00:40.874: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 27 00:00:45.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1336 -o json' May 27 00:00:46.049: INFO: stderr: "" May 27 00:00:46.049: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-27T00:00:40Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-27T00:00:40Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": 
{},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.112\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-27T00:00:44Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1336\",\n \"resourceVersion\": \"7942179\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1336/pods/e2e-test-httpd-pod\",\n \"uid\": \"914cd62b-7463-4002-aa64-160aaa0a6d04\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-m7vkc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-m7vkc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-m7vkc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-27T00:00:40Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-27T00:00:44Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-27T00:00:44Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-27T00:00:40Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f3f93c42cd6ffac55c4239259ffdc29479514817e267d1fcc67bd2c319c15890\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-27T00:00:43Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.112\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.112\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-27T00:00:40Z\"\n }\n}\n" STEP: replace the image in the pod May 27 00:00:46.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1336' May 27 00:00:46.386: INFO: stderr: "" May 27 00:00:46.386: INFO: stdout: 
"pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 27 00:00:46.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1336' May 27 00:00:55.314: INFO: stderr: "" May 27 00:00:55.314: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:00:55.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1336" for this suite. • [SLOW TEST:14.731 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":64,"skipped":1316,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:00:55.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0527 00:01:08.336163 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 27 00:01:08.336: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:01:08.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3661" for this suite. • [SLOW TEST:13.143 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":65,"skipped":1327,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:01:08.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:01:24.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6494" for this suite. • [SLOW TEST:16.122 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":66,"skipped":1343,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:01:24.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 27 00:01:29.997: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:01:30.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5379" for this suite. 
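The policy under test above, sketched on a hypothetical pod: with terminationMessagePolicy: FallbackToLogsOnError the kubelet copies the log tail into the termination message only on a non-zero exit, so a successful run (as in the log, "Expected: &{} to match") legitimately reports an empty message.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo shown-only-on-error; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# empty output expected, since the container succeeded:
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'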
• [SLOW TEST:5.465 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":67,"skipped":1352,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:01:30.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:01:30.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd4b12c4-6f8a-4c0a-8acf-5188cdda96ce" in namespace "downward-api-3595" to be "Succeeded or Failed" May 27 00:01:30.204: INFO: Pod "downwardapi-volume-cd4b12c4-6f8a-4c0a-8acf-5188cdda96ce": Phase="Pending", Reason="", readiness=false. Elapsed: 68.29696ms May 27 00:01:32.208: INFO: Pod "downwardapi-volume-cd4b12c4-6f8a-4c0a-8acf-5188cdda96ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072704172s May 27 00:01:34.213: INFO: Pod "downwardapi-volume-cd4b12c4-6f8a-4c0a-8acf-5188cdda96ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077928084s STEP: Saw pod success May 27 00:01:34.213: INFO: Pod "downwardapi-volume-cd4b12c4-6f8a-4c0a-8acf-5188cdda96ce" satisfied condition "Succeeded or Failed" May 27 00:01:34.217: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cd4b12c4-6f8a-4c0a-8acf-5188cdda96ce container client-container: STEP: delete the pod May 27 00:01:34.248: INFO: Waiting for pod downwardapi-volume-cd4b12c4-6f8a-4c0a-8acf-5188cdda96ce to disappear May 27 00:01:34.265: INFO: Pod downwardapi-volume-cd4b12c4-6f8a-4c0a-8acf-5188cdda96ce no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:01:34.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3595" for this suite. 
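What "set mode on item file" refers to in the Downward API spec above, as a minimal sketch (names and the 0400 mode are illustrative): each downward API volume item can carry its own file mode, which the test container reads back.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
  labels:
    zone: us-east
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/labels"]   # mode should read -r--------
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        mode: 0400                # the per-item mode this kind of spec asserts on
        fieldRef:
          fieldPath: metadata.labels
EOF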
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":68,"skipped":1355,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:01:34.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 27 00:01:34.364: INFO: Waiting up to 5m0s for pod "downward-api-bce79e1a-377f-4e31-921e-c7696ee4d98a" in namespace "downward-api-364" to be "Succeeded or Failed" May 27 00:01:34.392: INFO: Pod "downward-api-bce79e1a-377f-4e31-921e-c7696ee4d98a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.934343ms May 27 00:01:36.396: INFO: Pod "downward-api-bce79e1a-377f-4e31-921e-c7696ee4d98a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032501074s May 27 00:01:38.402: INFO: Pod "downward-api-bce79e1a-377f-4e31-921e-c7696ee4d98a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038113076s STEP: Saw pod success May 27 00:01:38.402: INFO: Pod "downward-api-bce79e1a-377f-4e31-921e-c7696ee4d98a" satisfied condition "Succeeded or Failed" May 27 00:01:38.406: INFO: Trying to get logs from node latest-worker pod downward-api-bce79e1a-377f-4e31-921e-c7696ee4d98a container dapi-container: STEP: delete the pod May 27 00:01:38.470: INFO: Waiting for pod downward-api-bce79e1a-377f-4e31-921e-c7696ee4d98a to disappear May 27 00:01:38.473: INFO: Pod downward-api-bce79e1a-377f-4e31-921e-c7696ee4d98a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:01:38.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-364" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":1362,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:01:38.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1232.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1232.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1232.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1232.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1232.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1232.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1232.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1232.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1232.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1232.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 168.245.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.245.168_udp@PTR;check="$$(dig +tcp +noall +answer +search 168.245.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.245.168_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1232.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1232.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1232.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1232.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1232.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1232.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1232.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1232.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1232.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1232.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1232.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 168.245.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.245.168_udp@PTR;check="$$(dig +tcp +noall +answer +search 168.245.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.245.168_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 27 00:01:45.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:45.058: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:45.061: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:45.065: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:45.126: INFO: Unable to read jessie_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:45.128: INFO: Unable to read jessie_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:45.130: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:45.133: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:45.146: INFO: Lookups using dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95 failed for: [wheezy_udp@dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_udp@dns-test-service.dns-1232.svc.cluster.local jessie_tcp@dns-test-service.dns-1232.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local] May 27 00:01:50.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:50.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods 
dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:50.158: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:50.160: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:50.184: INFO: Unable to read jessie_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:50.187: INFO: Unable to read jessie_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:50.189: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:50.192: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:50.212: INFO: Lookups using dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95 failed for: [wheezy_udp@dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_udp@dns-test-service.dns-1232.svc.cluster.local jessie_tcp@dns-test-service.dns-1232.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local] May 27 00:01:55.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:55.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:55.160: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:55.163: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:55.185: INFO: Unable to read jessie_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the 
server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:55.189: INFO: Unable to read jessie_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:55.195: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:55.199: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:01:55.217: INFO: Lookups using dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95 failed for: [wheezy_udp@dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_udp@dns-test-service.dns-1232.svc.cluster.local jessie_tcp@dns-test-service.dns-1232.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local] May 27 00:02:00.151: INFO: Unable to read wheezy_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:00.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:00.159: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:00.162: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:00.185: INFO: Unable to read jessie_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:00.189: INFO: Unable to read jessie_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:00.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:00.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod 
dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:00.218: INFO: Lookups using dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95 failed for: [wheezy_udp@dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_udp@dns-test-service.dns-1232.svc.cluster.local jessie_tcp@dns-test-service.dns-1232.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local] May 27 00:02:05.151: INFO: Unable to read wheezy_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:05.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:05.159: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:05.162: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:05.186: INFO: Unable to read jessie_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:05.190: INFO: Unable to read jessie_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:05.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:05.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:05.212: INFO: Lookups using dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95 failed for: [wheezy_udp@dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_udp@dns-test-service.dns-1232.svc.cluster.local jessie_tcp@dns-test-service.dns-1232.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local] May 27 
00:02:10.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:10.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:10.159: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:10.163: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:10.186: INFO: Unable to read jessie_udp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:10.190: INFO: Unable to read jessie_tcp@dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:10.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:10.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local from pod dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95: the server could not find the requested resource (get pods dns-test-fe778a83-ba6e-499f-92ee-945785618b95) May 27 00:02:10.213: INFO: Lookups using dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95 failed for: [wheezy_udp@dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@dns-test-service.dns-1232.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_udp@dns-test-service.dns-1232.svc.cluster.local jessie_tcp@dns-test-service.dns-1232.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1232.svc.cluster.local] May 27 00:02:15.260: INFO: DNS probes using dns-1232/dns-test-fe778a83-ba6e-499f-92ee-945785618b95 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:02:16.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1232" for this suite. 
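The probe names above follow the standard cluster-DNS conventions: A/AAAA records at <service>.<namespace>.svc.cluster.local and SRV records at _<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local; "wheezy" and "jessie" are just the two prober images the test runs, and the lookups eventually succeeded once the prober pod's results became readable. A minimal, illustrative Go sketch of equivalent lookups from inside a pod on this cluster (service and namespace names taken from the log; this is not the e2e framework's own probe code):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// A/AAAA lookup for the service name probed in the log above.
	addrs, err := net.DefaultResolver.LookupHost(ctx, "dns-test-service.dns-1232.svc.cluster.local")
	fmt.Println("A/AAAA:", addrs, err)

	// SRV lookup: equivalent to the _http._tcp.<service> probes in the log.
	cname, srvs, err := net.DefaultResolver.LookupSRV(ctx, "http", "tcp", "dns-test-service.dns-1232.svc.cluster.local")
	fmt.Println("SRV cname:", cname, err)
	for _, s := range srvs {
		fmt.Printf("  target=%s port=%d weight=%d\n", s.Target, s.Port, s.Weight)
	}
}
```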
• [SLOW TEST:37.534 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":70,"skipped":1376,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:02:16.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 27 00:04:16.676: INFO: Successfully updated pod "var-expansion-2ba12bc8-3280-4e7f-9d14-c7b4da38cdea" STEP: waiting for pod running STEP: deleting the pod gracefully May 27 00:04:18.709: INFO: Deleting pod "var-expansion-2ba12bc8-3280-4e7f-9d14-c7b4da38cdea" in namespace "var-expansion-9185" May 27 00:04:18.714: INFO: Wait up to 5m0s for pod "var-expansion-2ba12bc8-3280-4e7f-9d14-c7b4da38cdea" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:04:56.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9185" for this suite. 
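The variable-expansion test above exercises subPathExpr, which expands container environment variables into a volume mount's subpath; an expansion that cannot be resolved keeps the pod from starting until the spec is modified, which is roughly the lifecycle the test drives ("creating the pod with failed condition", then "updating the pod"). A minimal sketch of such a mount under assumed names, printing the spec as JSON rather than creating it:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "dapi-container", // hypothetical name
			Image: "busybox",
			Env: []corev1.EnvVar{{
				Name: "POD_NAME",
				ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				},
			}},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "workdir",
				MountPath: "/logs",
				// SubPathExpr is expanded from the container's environment at
				// mount time; referencing an undefined variable makes the mount
				// (and hence the pod) fail until the spec is corrected.
				SubPathExpr: "$(POD_NAME)",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name:         "workdir",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
	}
	b, err := json.MarshalIndent(spec, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```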
• [SLOW TEST:160.760 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":71,"skipped":1387,"failed":0} SS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:04:56.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 27 00:04:56.852: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1330" to be "Succeeded or Failed" May 27 00:04:56.870: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.922096ms May 27 00:04:58.984: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131697702s May 27 00:05:00.989: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136227239s May 27 00:05:02.994: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141254966s STEP: Saw pod success May 27 00:05:02.994: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 27 00:05:02.997: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 27 00:05:03.052: INFO: Waiting for pod pod-host-path-test to disappear May 27 00:05:03.066: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:05:03.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1330" for this suite. 
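For context, the hostPath test above mounts a directory from the node into the pod and asserts on the resulting file mode seen inside the container. A minimal sketch of the volume definition, with a hypothetical host path:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// DirectoryOrCreate tells the kubelet to create the path on the node if
	// it does not already exist.
	hostPathType := corev1.HostPathDirectoryOrCreate
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/tmp/hostpath-e2e", // hypothetical path
				Type: &hostPathType,
			},
		},
	}
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	b, _ := json.MarshalIndent(map[string]interface{}{"volume": vol, "mount": mount}, "", "  ")
	fmt.Println(string(b))
}
```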
• [SLOW TEST:6.297 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":72,"skipped":1389,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:05:03.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9072 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9072 STEP: Creating statefulset with conflicting port in namespace statefulset-9072 STEP: Waiting until pod test-pod starts running in namespace statefulset-9072 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-9072 May 27 00:05:07.243: INFO: Observed stateful pod in namespace: statefulset-9072, name: ss-0, uid: d2ef42ca-6e7a-4f57-856e-60731cae3142, status phase: Pending. Waiting for statefulset controller to delete. May 27 00:05:07.807: INFO: Observed stateful pod in namespace: statefulset-9072, name: ss-0, uid: d2ef42ca-6e7a-4f57-856e-60731cae3142, status phase: Failed. Waiting for statefulset controller to delete. May 27 00:05:07.817: INFO: Observed stateful pod in namespace: statefulset-9072, name: ss-0, uid: d2ef42ca-6e7a-4f57-856e-60731cae3142, status phase: Failed. Waiting for statefulset controller to delete.
May 27 00:05:07.863: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9072 STEP: Removing pod with conflicting port in namespace statefulset-9072 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-9072 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 27 00:05:14.015: INFO: Deleting all statefulsets in ns statefulset-9072 May 27 00:05:14.018: INFO: Scaling statefulset ss to 0 May 27 00:05:34.038: INFO: Waiting for statefulset status.replicas updated to 0 May 27 00:05:34.110: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:05:34.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9072" for this suite. • [SLOW TEST:31.062 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":73,"skipped":1395,"failed":0} S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:05:34.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-887 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-887 STEP: Deleting pre-stop pod May 27 00:05:47.287: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:05:47.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-887" for this suite.
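The "prestop": 1 entry the tester saw above is written by the server pod when the deleted pod's preStop hook fires. A rough sketch of how such a hook is attached to a container, modeled on what this test does; the wget target is a placeholder for the server pod's IP, and note that client-go releases newer than the v1.18-era API in this log renamed corev1.Handler to corev1.LifecycleHandler:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "tester",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			// PreStop runs before the kubelet terminates the container; here
			// it POSTs a marker to a peer pod, as the e2e test's hook does.
			PreStop: &corev1.Handler{
				Exec: &corev1.ExecAction{
					Command: []string{
						"wget", "-O-",
						`--post-data={"Source": "prestop"}`,
						"http://10.0.0.1:8080/write", // placeholder for the server pod IP
					},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
```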
• [SLOW TEST:13.171 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":74,"skipped":1396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:05:47.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-9deec2f9-a458-4768-8257-dc1f0d596850 STEP: Creating a pod to test consume configMaps May 27 00:05:47.435: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-16f9914c-86db-4ff2-9a58-182f8bfba209" in namespace "projected-9598" to be "Succeeded or Failed" May 27 00:05:47.438: INFO: Pod "pod-projected-configmaps-16f9914c-86db-4ff2-9a58-182f8bfba209": Phase="Pending", Reason="", readiness=false. Elapsed: 3.807275ms May 27 00:05:49.529: INFO: Pod "pod-projected-configmaps-16f9914c-86db-4ff2-9a58-182f8bfba209": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094764546s May 27 00:05:51.534: INFO: Pod "pod-projected-configmaps-16f9914c-86db-4ff2-9a58-182f8bfba209": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099050285s STEP: Saw pod success May 27 00:05:51.534: INFO: Pod "pod-projected-configmaps-16f9914c-86db-4ff2-9a58-182f8bfba209" satisfied condition "Succeeded or Failed" May 27 00:05:51.536: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-16f9914c-86db-4ff2-9a58-182f8bfba209 container projected-configmap-volume-test: STEP: delete the pod May 27 00:05:51.594: INFO: Waiting for pod pod-projected-configmaps-16f9914c-86db-4ff2-9a58-182f8bfba209 to disappear May 27 00:05:51.613: INFO: Pod pod-projected-configmaps-16f9914c-86db-4ff2-9a58-182f8bfba209 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:05:51.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9598" for this suite. 
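The projected-configMap test above exposes the same configMap through two separate projected volumes in one pod and reads it back from both mount paths. A minimal sketch of that volume layout, using hypothetical names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume returns a projected volume exposing the given
// configMap; the test mounts two of these, at different paths, in one pod.
func projectedConfigMapVolume(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}

func main() {
	vols := []corev1.Volume{
		projectedConfigMapVolume("projected-configmap-volume-1", "projected-configmap-test-volume"),
		projectedConfigMapVolume("projected-configmap-volume-2", "projected-configmap-test-volume"),
	}
	b, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(b))
}
```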
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":75,"skipped":1485,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:05:51.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-ca3439ab-ec2b-4b97-8607-a9770b50d193 STEP: Creating a pod to test consume secrets May 27 00:05:51.752: INFO: Waiting up to 5m0s for pod "pod-secrets-faaa3478-8279-408c-9194-f5293819a146" in namespace "secrets-331" to be "Succeeded or Failed" May 27 00:05:51.756: INFO: Pod "pod-secrets-faaa3478-8279-408c-9194-f5293819a146": Phase="Pending", Reason="", readiness=false. Elapsed: 3.925755ms May 27 00:05:53.761: INFO: Pod "pod-secrets-faaa3478-8279-408c-9194-f5293819a146": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008283664s May 27 00:05:55.764: INFO: Pod "pod-secrets-faaa3478-8279-408c-9194-f5293819a146": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012095975s STEP: Saw pod success May 27 00:05:55.764: INFO: Pod "pod-secrets-faaa3478-8279-408c-9194-f5293819a146" satisfied condition "Succeeded or Failed" May 27 00:05:55.768: INFO: Trying to get logs from node latest-worker pod pod-secrets-faaa3478-8279-408c-9194-f5293819a146 container secret-volume-test: STEP: delete the pod May 27 00:05:55.815: INFO: Waiting for pod pod-secrets-faaa3478-8279-408c-9194-f5293819a146 to disappear May 27 00:05:55.858: INFO: Pod pod-secrets-faaa3478-8279-408c-9194-f5293819a146 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:05:55.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-331" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":76,"skipped":1485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:05:55.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0527 00:05:58.766687 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 27 00:05:58.766: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:05:58.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8247" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":77,"skipped":1520,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:05:58.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-10c25342-99f9-4654-bea9-195e18a96f78 in namespace container-probe-5151 May 27 00:06:03.340: INFO: Started pod liveness-10c25342-99f9-4654-bea9-195e18a96f78 in namespace container-probe-5151 STEP: checking the pod's current state and verifying that restartCount is present May 27 00:06:03.343: INFO: Initial restart count of pod liveness-10c25342-99f9-4654-bea9-195e18a96f78 is 0 May 27 00:06:25.403: INFO: Restart count of pod container-probe-5151/liveness-10c25342-99f9-4654-bea9-195e18a96f78 is now 1 (22.059970197s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:06:25.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5151" for this suite. 
• [SLOW TEST:26.681 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":78,"skipped":1524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:06:25.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 27 00:06:35.953: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:35.953: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:35.991932 8 log.go:172] (0xc00276edc0) (0xc0017917c0) Create stream I0527 00:06:35.991961 8 log.go:172] (0xc00276edc0) (0xc0017917c0) Stream added, broadcasting: 1 I0527 00:06:35.994913 8 log.go:172] (0xc00276edc0) Reply frame received for 1 I0527 00:06:35.994941 8 log.go:172] (0xc00276edc0) (0xc002b1b040) Create stream I0527 00:06:35.994948 8 log.go:172] (0xc00276edc0) (0xc002b1b040) Stream added, broadcasting: 3 I0527 00:06:35.996120 8 log.go:172] (0xc00276edc0) Reply frame received for 3 I0527 00:06:35.996172 8 log.go:172] (0xc00276edc0) (0xc002b1b0e0) Create stream I0527 00:06:35.996188 8 log.go:172] (0xc00276edc0) (0xc002b1b0e0) Stream added, broadcasting: 5 I0527 00:06:35.997230 8 log.go:172] (0xc00276edc0) Reply frame received for 5 I0527 00:06:36.076674 8 log.go:172] (0xc00276edc0) Data frame received for 5 I0527 00:06:36.076698 8 log.go:172] (0xc002b1b0e0) (5) Data frame handling I0527 00:06:36.076725 8 log.go:172] (0xc00276edc0) Data frame received for 3 I0527 00:06:36.076757 8 log.go:172] (0xc002b1b040) (3) Data frame handling I0527 00:06:36.076777 8 log.go:172] (0xc002b1b040) (3) Data frame sent I0527 00:06:36.076788 8 log.go:172] (0xc00276edc0) Data frame received for 3 I0527 00:06:36.076802 8 log.go:172] (0xc002b1b040) (3) Data frame handling I0527 00:06:36.078719 8 log.go:172] (0xc00276edc0) Data frame received for 1 I0527 00:06:36.078744 8 log.go:172] (0xc0017917c0) (1) Data frame handling I0527 00:06:36.078767 8 log.go:172] (0xc0017917c0) (1) Data frame sent I0527 00:06:36.078784 8 log.go:172] (0xc00276edc0) (0xc0017917c0) Stream removed, broadcasting: 1 I0527 00:06:36.078801 8 log.go:172] 
(0xc00276edc0) Go away received I0527 00:06:36.078945 8 log.go:172] (0xc00276edc0) (0xc0017917c0) Stream removed, broadcasting: 1 I0527 00:06:36.078971 8 log.go:172] (0xc00276edc0) (0xc002b1b040) Stream removed, broadcasting: 3 I0527 00:06:36.078989 8 log.go:172] (0xc00276edc0) (0xc002b1b0e0) Stream removed, broadcasting: 5 May 27 00:06:36.079: INFO: Exec stderr: "" May 27 00:06:36.079: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.079: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.103348 8 log.go:172] (0xc0025cc4d0) (0xc00200e320) Create stream I0527 00:06:36.103377 8 log.go:172] (0xc0025cc4d0) (0xc00200e320) Stream added, broadcasting: 1 I0527 00:06:36.105634 8 log.go:172] (0xc0025cc4d0) Reply frame received for 1 I0527 00:06:36.105679 8 log.go:172] (0xc0025cc4d0) (0xc00200e3c0) Create stream I0527 00:06:36.105692 8 log.go:172] (0xc0025cc4d0) (0xc00200e3c0) Stream added, broadcasting: 3 I0527 00:06:36.106826 8 log.go:172] (0xc0025cc4d0) Reply frame received for 3 I0527 00:06:36.106874 8 log.go:172] (0xc0025cc4d0) (0xc000598460) Create stream I0527 00:06:36.106890 8 log.go:172] (0xc0025cc4d0) (0xc000598460) Stream added, broadcasting: 5 I0527 00:06:36.107863 8 log.go:172] (0xc0025cc4d0) Reply frame received for 5 I0527 00:06:36.168191 8 log.go:172] (0xc0025cc4d0) Data frame received for 3 I0527 00:06:36.168226 8 log.go:172] (0xc00200e3c0) (3) Data frame handling I0527 00:06:36.168244 8 log.go:172] (0xc00200e3c0) (3) Data frame sent I0527 00:06:36.168253 8 log.go:172] (0xc0025cc4d0) Data frame received for 3 I0527 00:06:36.168264 8 log.go:172] (0xc00200e3c0) (3) Data frame handling I0527 00:06:36.168302 8 log.go:172] (0xc0025cc4d0) Data frame received for 5 I0527 00:06:36.168336 8 log.go:172] (0xc000598460) (5) Data frame handling I0527 00:06:36.169876 8 log.go:172] (0xc0025cc4d0) Data frame received for 1 I0527 00:06:36.169898 8 log.go:172] (0xc00200e320) (1) Data frame handling I0527 00:06:36.169911 8 log.go:172] (0xc00200e320) (1) Data frame sent I0527 00:06:36.169935 8 log.go:172] (0xc0025cc4d0) (0xc00200e320) Stream removed, broadcasting: 1 I0527 00:06:36.169972 8 log.go:172] (0xc0025cc4d0) Go away received I0527 00:06:36.170052 8 log.go:172] (0xc0025cc4d0) (0xc00200e320) Stream removed, broadcasting: 1 I0527 00:06:36.170094 8 log.go:172] (0xc0025cc4d0) (0xc00200e3c0) Stream removed, broadcasting: 3 I0527 00:06:36.170142 8 log.go:172] (0xc0025cc4d0) (0xc000598460) Stream removed, broadcasting: 5 May 27 00:06:36.170: INFO: Exec stderr: "" May 27 00:06:36.170: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.170: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.205524 8 log.go:172] (0xc002f624d0) (0xc002b1b360) Create stream I0527 00:06:36.205559 8 log.go:172] (0xc002f624d0) (0xc002b1b360) Stream added, broadcasting: 1 I0527 00:06:36.208668 8 log.go:172] (0xc002f624d0) Reply frame received for 1 I0527 00:06:36.208772 8 log.go:172] (0xc002f624d0) (0xc002a76280) Create stream I0527 00:06:36.208785 8 log.go:172] (0xc002f624d0) (0xc002a76280) Stream added, broadcasting: 3 I0527 00:06:36.210000 8 log.go:172] (0xc002f624d0) Reply frame received for 3 I0527 00:06:36.210039 8 log.go:172] (0xc002f624d0) (0xc001791900) Create stream I0527 
00:06:36.210054 8 log.go:172] (0xc002f624d0) (0xc001791900) Stream added, broadcasting: 5 I0527 00:06:36.210850 8 log.go:172] (0xc002f624d0) Reply frame received for 5 I0527 00:06:36.277067 8 log.go:172] (0xc002f624d0) Data frame received for 5 I0527 00:06:36.277094 8 log.go:172] (0xc001791900) (5) Data frame handling I0527 00:06:36.277316 8 log.go:172] (0xc002f624d0) Data frame received for 3 I0527 00:06:36.277406 8 log.go:172] (0xc002a76280) (3) Data frame handling I0527 00:06:36.277427 8 log.go:172] (0xc002a76280) (3) Data frame sent I0527 00:06:36.277438 8 log.go:172] (0xc002f624d0) Data frame received for 3 I0527 00:06:36.277447 8 log.go:172] (0xc002a76280) (3) Data frame handling I0527 00:06:36.278745 8 log.go:172] (0xc002f624d0) Data frame received for 1 I0527 00:06:36.278773 8 log.go:172] (0xc002b1b360) (1) Data frame handling I0527 00:06:36.278806 8 log.go:172] (0xc002b1b360) (1) Data frame sent I0527 00:06:36.278827 8 log.go:172] (0xc002f624d0) (0xc002b1b360) Stream removed, broadcasting: 1 I0527 00:06:36.278846 8 log.go:172] (0xc002f624d0) Go away received I0527 00:06:36.279054 8 log.go:172] (0xc002f624d0) (0xc002b1b360) Stream removed, broadcasting: 1 I0527 00:06:36.279098 8 log.go:172] (0xc002f624d0) (0xc002a76280) Stream removed, broadcasting: 3 I0527 00:06:36.279131 8 log.go:172] (0xc002f624d0) (0xc001791900) Stream removed, broadcasting: 5 May 27 00:06:36.279: INFO: Exec stderr: "" May 27 00:06:36.279: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.279: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.325998 8 log.go:172] (0xc00276f760) (0xc001791cc0) Create stream I0527 00:06:36.326045 8 log.go:172] (0xc00276f760) (0xc001791cc0) Stream added, broadcasting: 1 I0527 00:06:36.329630 8 log.go:172] (0xc00276f760) Reply frame received for 1 I0527 00:06:36.329677 8 log.go:172] (0xc00276f760) (0xc00200e820) Create stream I0527 00:06:36.329696 8 log.go:172] (0xc00276f760) (0xc00200e820) Stream added, broadcasting: 3 I0527 00:06:36.330928 8 log.go:172] (0xc00276f760) Reply frame received for 3 I0527 00:06:36.330990 8 log.go:172] (0xc00276f760) (0xc002b1b400) Create stream I0527 00:06:36.331011 8 log.go:172] (0xc00276f760) (0xc002b1b400) Stream added, broadcasting: 5 I0527 00:06:36.332039 8 log.go:172] (0xc00276f760) Reply frame received for 5 I0527 00:06:36.423862 8 log.go:172] (0xc00276f760) Data frame received for 5 I0527 00:06:36.423923 8 log.go:172] (0xc002b1b400) (5) Data frame handling I0527 00:06:36.423962 8 log.go:172] (0xc00276f760) Data frame received for 3 I0527 00:06:36.423979 8 log.go:172] (0xc00200e820) (3) Data frame handling I0527 00:06:36.424004 8 log.go:172] (0xc00200e820) (3) Data frame sent I0527 00:06:36.424022 8 log.go:172] (0xc00276f760) Data frame received for 3 I0527 00:06:36.424044 8 log.go:172] (0xc00200e820) (3) Data frame handling I0527 00:06:36.425483 8 log.go:172] (0xc00276f760) Data frame received for 1 I0527 00:06:36.425516 8 log.go:172] (0xc001791cc0) (1) Data frame handling I0527 00:06:36.425540 8 log.go:172] (0xc001791cc0) (1) Data frame sent I0527 00:06:36.425564 8 log.go:172] (0xc00276f760) (0xc001791cc0) Stream removed, broadcasting: 1 I0527 00:06:36.425636 8 log.go:172] (0xc00276f760) Go away received I0527 00:06:36.425911 8 log.go:172] (0xc00276f760) (0xc001791cc0) Stream removed, broadcasting: 1 I0527 00:06:36.425937 8 log.go:172] (0xc00276f760) (0xc00200e820) 
Stream removed, broadcasting: 3 I0527 00:06:36.425956 8 log.go:172] (0xc00276f760) (0xc002b1b400) Stream removed, broadcasting: 5 May 27 00:06:36.425: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 27 00:06:36.426: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.426: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.452832 8 log.go:172] (0xc002f62a50) (0xc002b1b540) Create stream I0527 00:06:36.452867 8 log.go:172] (0xc002f62a50) (0xc002b1b540) Stream added, broadcasting: 1 I0527 00:06:36.456252 8 log.go:172] (0xc002f62a50) Reply frame received for 1 I0527 00:06:36.456296 8 log.go:172] (0xc002f62a50) (0xc002a76320) Create stream I0527 00:06:36.456310 8 log.go:172] (0xc002f62a50) (0xc002a76320) Stream added, broadcasting: 3 I0527 00:06:36.457426 8 log.go:172] (0xc002f62a50) Reply frame received for 3 I0527 00:06:36.457471 8 log.go:172] (0xc002f62a50) (0xc002a763c0) Create stream I0527 00:06:36.457488 8 log.go:172] (0xc002f62a50) (0xc002a763c0) Stream added, broadcasting: 5 I0527 00:06:36.458311 8 log.go:172] (0xc002f62a50) Reply frame received for 5 I0527 00:06:36.516773 8 log.go:172] (0xc002f62a50) Data frame received for 3 I0527 00:06:36.516822 8 log.go:172] (0xc002a76320) (3) Data frame handling I0527 00:06:36.516952 8 log.go:172] (0xc002a76320) (3) Data frame sent I0527 00:06:36.516989 8 log.go:172] (0xc002f62a50) Data frame received for 3 I0527 00:06:36.517012 8 log.go:172] (0xc002a76320) (3) Data frame handling I0527 00:06:36.517075 8 log.go:172] (0xc002f62a50) Data frame received for 5 I0527 00:06:36.517102 8 log.go:172] (0xc002a763c0) (5) Data frame handling I0527 00:06:36.518887 8 log.go:172] (0xc002f62a50) Data frame received for 1 I0527 00:06:36.518912 8 log.go:172] (0xc002b1b540) (1) Data frame handling I0527 00:06:36.518928 8 log.go:172] (0xc002b1b540) (1) Data frame sent I0527 00:06:36.518952 8 log.go:172] (0xc002f62a50) (0xc002b1b540) Stream removed, broadcasting: 1 I0527 00:06:36.518987 8 log.go:172] (0xc002f62a50) Go away received I0527 00:06:36.519124 8 log.go:172] (0xc002f62a50) (0xc002b1b540) Stream removed, broadcasting: 1 I0527 00:06:36.519157 8 log.go:172] (0xc002f62a50) (0xc002a76320) Stream removed, broadcasting: 3 I0527 00:06:36.519168 8 log.go:172] (0xc002f62a50) (0xc002a763c0) Stream removed, broadcasting: 5 May 27 00:06:36.519: INFO: Exec stderr: "" May 27 00:06:36.519: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.519: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.551778 8 log.go:172] (0xc0023be420) (0xc002a76640) Create stream I0527 00:06:36.551814 8 log.go:172] (0xc0023be420) (0xc002a76640) Stream added, broadcasting: 1 I0527 00:06:36.555123 8 log.go:172] (0xc0023be420) Reply frame received for 1 I0527 00:06:36.555181 8 log.go:172] (0xc0023be420) (0xc002a768c0) Create stream I0527 00:06:36.555218 8 log.go:172] (0xc0023be420) (0xc002a768c0) Stream added, broadcasting: 3 I0527 00:06:36.556034 8 log.go:172] (0xc0023be420) Reply frame received for 3 I0527 00:06:36.556072 8 log.go:172] (0xc0023be420) (0xc00200e960) Create stream I0527 00:06:36.556090 8 log.go:172] (0xc0023be420) (0xc00200e960) Stream added, broadcasting: 5 I0527 
00:06:36.556843 8 log.go:172] (0xc0023be420) Reply frame received for 5 I0527 00:06:36.615840 8 log.go:172] (0xc0023be420) Data frame received for 5 I0527 00:06:36.615904 8 log.go:172] (0xc00200e960) (5) Data frame handling I0527 00:06:36.615944 8 log.go:172] (0xc0023be420) Data frame received for 3 I0527 00:06:36.615965 8 log.go:172] (0xc002a768c0) (3) Data frame handling I0527 00:06:36.615995 8 log.go:172] (0xc002a768c0) (3) Data frame sent I0527 00:06:36.616019 8 log.go:172] (0xc0023be420) Data frame received for 3 I0527 00:06:36.616040 8 log.go:172] (0xc002a768c0) (3) Data frame handling I0527 00:06:36.617807 8 log.go:172] (0xc0023be420) Data frame received for 1 I0527 00:06:36.617828 8 log.go:172] (0xc002a76640) (1) Data frame handling I0527 00:06:36.617839 8 log.go:172] (0xc002a76640) (1) Data frame sent I0527 00:06:36.617851 8 log.go:172] (0xc0023be420) (0xc002a76640) Stream removed, broadcasting: 1 I0527 00:06:36.617866 8 log.go:172] (0xc0023be420) Go away received I0527 00:06:36.618223 8 log.go:172] (0xc0023be420) (0xc002a76640) Stream removed, broadcasting: 1 I0527 00:06:36.618242 8 log.go:172] (0xc0023be420) (0xc002a768c0) Stream removed, broadcasting: 3 I0527 00:06:36.618253 8 log.go:172] (0xc0023be420) (0xc00200e960) Stream removed, broadcasting: 5 May 27 00:06:36.618: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 27 00:06:36.618: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.618: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.652553 8 log.go:172] (0xc0023bea50) (0xc002a76c80) Create stream I0527 00:06:36.652591 8 log.go:172] (0xc0023bea50) (0xc002a76c80) Stream added, broadcasting: 1 I0527 00:06:36.655584 8 log.go:172] (0xc0023bea50) Reply frame received for 1 I0527 00:06:36.655623 8 log.go:172] (0xc0023bea50) (0xc0005985a0) Create stream I0527 00:06:36.655631 8 log.go:172] (0xc0023bea50) (0xc0005985a0) Stream added, broadcasting: 3 I0527 00:06:36.656665 8 log.go:172] (0xc0023bea50) Reply frame received for 3 I0527 00:06:36.656717 8 log.go:172] (0xc0023bea50) (0xc00255a000) Create stream I0527 00:06:36.656740 8 log.go:172] (0xc0023bea50) (0xc00255a000) Stream added, broadcasting: 5 I0527 00:06:36.658009 8 log.go:172] (0xc0023bea50) Reply frame received for 5 I0527 00:06:36.750283 8 log.go:172] (0xc0023bea50) Data frame received for 5 I0527 00:06:36.750326 8 log.go:172] (0xc00255a000) (5) Data frame handling I0527 00:06:36.750353 8 log.go:172] (0xc0023bea50) Data frame received for 3 I0527 00:06:36.750366 8 log.go:172] (0xc0005985a0) (3) Data frame handling I0527 00:06:36.750381 8 log.go:172] (0xc0005985a0) (3) Data frame sent I0527 00:06:36.750395 8 log.go:172] (0xc0023bea50) Data frame received for 3 I0527 00:06:36.750406 8 log.go:172] (0xc0005985a0) (3) Data frame handling I0527 00:06:36.751873 8 log.go:172] (0xc0023bea50) Data frame received for 1 I0527 00:06:36.751910 8 log.go:172] (0xc002a76c80) (1) Data frame handling I0527 00:06:36.751941 8 log.go:172] (0xc002a76c80) (1) Data frame sent I0527 00:06:36.751968 8 log.go:172] (0xc0023bea50) (0xc002a76c80) Stream removed, broadcasting: 1 I0527 00:06:36.752020 8 log.go:172] (0xc0023bea50) Go away received I0527 00:06:36.752215 8 log.go:172] (0xc0023bea50) (0xc002a76c80) Stream removed, broadcasting: 1 I0527 00:06:36.752245 8 log.go:172] (0xc0023bea50) 
(0xc0005985a0) Stream removed, broadcasting: 3 I0527 00:06:36.752267 8 log.go:172] (0xc0023bea50) (0xc00255a000) Stream removed, broadcasting: 5 May 27 00:06:36.752: INFO: Exec stderr: "" May 27 00:06:36.752: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.752: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.785480 8 log.go:172] (0xc002f63080) (0xc002b1b720) Create stream I0527 00:06:36.785511 8 log.go:172] (0xc002f63080) (0xc002b1b720) Stream added, broadcasting: 1 I0527 00:06:36.788045 8 log.go:172] (0xc002f63080) Reply frame received for 1 I0527 00:06:36.788092 8 log.go:172] (0xc002f63080) (0xc00255a0a0) Create stream I0527 00:06:36.788108 8 log.go:172] (0xc002f63080) (0xc00255a0a0) Stream added, broadcasting: 3 I0527 00:06:36.789000 8 log.go:172] (0xc002f63080) Reply frame received for 3 I0527 00:06:36.789034 8 log.go:172] (0xc002f63080) (0xc000598aa0) Create stream I0527 00:06:36.789048 8 log.go:172] (0xc002f63080) (0xc000598aa0) Stream added, broadcasting: 5 I0527 00:06:36.790134 8 log.go:172] (0xc002f63080) Reply frame received for 5 I0527 00:06:36.834700 8 log.go:172] (0xc002f63080) Data frame received for 3 I0527 00:06:36.834740 8 log.go:172] (0xc00255a0a0) (3) Data frame handling I0527 00:06:36.834751 8 log.go:172] (0xc00255a0a0) (3) Data frame sent I0527 00:06:36.834759 8 log.go:172] (0xc002f63080) Data frame received for 3 I0527 00:06:36.834763 8 log.go:172] (0xc00255a0a0) (3) Data frame handling I0527 00:06:36.834835 8 log.go:172] (0xc002f63080) Data frame received for 5 I0527 00:06:36.834854 8 log.go:172] (0xc000598aa0) (5) Data frame handling I0527 00:06:36.835948 8 log.go:172] (0xc002f63080) Data frame received for 1 I0527 00:06:36.835961 8 log.go:172] (0xc002b1b720) (1) Data frame handling I0527 00:06:36.835975 8 log.go:172] (0xc002b1b720) (1) Data frame sent I0527 00:06:36.835988 8 log.go:172] (0xc002f63080) (0xc002b1b720) Stream removed, broadcasting: 1 I0527 00:06:36.836004 8 log.go:172] (0xc002f63080) Go away received I0527 00:06:36.836119 8 log.go:172] (0xc002f63080) (0xc002b1b720) Stream removed, broadcasting: 1 I0527 00:06:36.836140 8 log.go:172] (0xc002f63080) (0xc00255a0a0) Stream removed, broadcasting: 3 I0527 00:06:36.836154 8 log.go:172] (0xc002f63080) (0xc000598aa0) Stream removed, broadcasting: 5 May 27 00:06:36.836: INFO: Exec stderr: "" May 27 00:06:36.836: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.836: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.864996 8 log.go:172] (0xc002aaa000) (0xc00255a280) Create stream I0527 00:06:36.865026 8 log.go:172] (0xc002aaa000) (0xc00255a280) Stream added, broadcasting: 1 I0527 00:06:36.868373 8 log.go:172] (0xc002aaa000) Reply frame received for 1 I0527 00:06:36.868423 8 log.go:172] (0xc002aaa000) (0xc000599860) Create stream I0527 00:06:36.868443 8 log.go:172] (0xc002aaa000) (0xc000599860) Stream added, broadcasting: 3 I0527 00:06:36.869657 8 log.go:172] (0xc002aaa000) Reply frame received for 3 I0527 00:06:36.869735 8 log.go:172] (0xc002aaa000) (0xc002b1b7c0) Create stream I0527 00:06:36.869757 8 log.go:172] (0xc002aaa000) (0xc002b1b7c0) Stream added, broadcasting: 5 I0527 00:06:36.870878 8 log.go:172] (0xc002aaa000) Reply frame received 
for 5 I0527 00:06:36.930913 8 log.go:172] (0xc002aaa000) Data frame received for 5 I0527 00:06:36.930933 8 log.go:172] (0xc002b1b7c0) (5) Data frame handling I0527 00:06:36.930955 8 log.go:172] (0xc002aaa000) Data frame received for 3 I0527 00:06:36.930961 8 log.go:172] (0xc000599860) (3) Data frame handling I0527 00:06:36.930969 8 log.go:172] (0xc000599860) (3) Data frame sent I0527 00:06:36.930978 8 log.go:172] (0xc002aaa000) Data frame received for 3 I0527 00:06:36.930983 8 log.go:172] (0xc000599860) (3) Data frame handling I0527 00:06:36.932536 8 log.go:172] (0xc002aaa000) Data frame received for 1 I0527 00:06:36.932552 8 log.go:172] (0xc00255a280) (1) Data frame handling I0527 00:06:36.932569 8 log.go:172] (0xc00255a280) (1) Data frame sent I0527 00:06:36.932582 8 log.go:172] (0xc002aaa000) (0xc00255a280) Stream removed, broadcasting: 1 I0527 00:06:36.932595 8 log.go:172] (0xc002aaa000) Go away received I0527 00:06:36.932727 8 log.go:172] (0xc002aaa000) (0xc00255a280) Stream removed, broadcasting: 1 I0527 00:06:36.932752 8 log.go:172] (0xc002aaa000) (0xc000599860) Stream removed, broadcasting: 3 I0527 00:06:36.932765 8 log.go:172] (0xc002aaa000) (0xc002b1b7c0) Stream removed, broadcasting: 5 May 27 00:06:36.932: INFO: Exec stderr: "" May 27 00:06:36.932: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3394 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:06:36.932: INFO: >>> kubeConfig: /root/.kube/config I0527 00:06:36.967045 8 log.go:172] (0xc001aa0160) (0xc001f4a1e0) Create stream I0527 00:06:36.967081 8 log.go:172] (0xc001aa0160) (0xc001f4a1e0) Stream added, broadcasting: 1 I0527 00:06:36.971151 8 log.go:172] (0xc001aa0160) Reply frame received for 1 I0527 00:06:36.971197 8 log.go:172] (0xc001aa0160) (0xc00200ea00) Create stream I0527 00:06:36.971214 8 log.go:172] (0xc001aa0160) (0xc00200ea00) Stream added, broadcasting: 3 I0527 00:06:36.972311 8 log.go:172] (0xc001aa0160) Reply frame received for 3 I0527 00:06:36.972352 8 log.go:172] (0xc001aa0160) (0xc00255a320) Create stream I0527 00:06:36.972378 8 log.go:172] (0xc001aa0160) (0xc00255a320) Stream added, broadcasting: 5 I0527 00:06:36.973574 8 log.go:172] (0xc001aa0160) Reply frame received for 5 I0527 00:06:37.044670 8 log.go:172] (0xc001aa0160) Data frame received for 5 I0527 00:06:37.044715 8 log.go:172] (0xc00255a320) (5) Data frame handling I0527 00:06:37.044741 8 log.go:172] (0xc001aa0160) Data frame received for 3 I0527 00:06:37.044754 8 log.go:172] (0xc00200ea00) (3) Data frame handling I0527 00:06:37.044770 8 log.go:172] (0xc00200ea00) (3) Data frame sent I0527 00:06:37.044786 8 log.go:172] (0xc001aa0160) Data frame received for 3 I0527 00:06:37.044798 8 log.go:172] (0xc00200ea00) (3) Data frame handling I0527 00:06:37.046431 8 log.go:172] (0xc001aa0160) Data frame received for 1 I0527 00:06:37.046504 8 log.go:172] (0xc001f4a1e0) (1) Data frame handling I0527 00:06:37.046547 8 log.go:172] (0xc001f4a1e0) (1) Data frame sent I0527 00:06:37.046570 8 log.go:172] (0xc001aa0160) (0xc001f4a1e0) Stream removed, broadcasting: 1 I0527 00:06:37.046594 8 log.go:172] (0xc001aa0160) Go away received I0527 00:06:37.046705 8 log.go:172] (0xc001aa0160) (0xc001f4a1e0) Stream removed, broadcasting: 1 I0527 00:06:37.046729 8 log.go:172] (0xc001aa0160) (0xc00200ea00) Stream removed, broadcasting: 3 I0527 00:06:37.046744 8 log.go:172] (0xc001aa0160) (0xc00255a320) Stream removed, broadcasting: 5 May 27 00:06:37.046: 
INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:06:37.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3394" for this suite. • [SLOW TEST:11.645 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1557,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:06:37.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 27 00:06:41.770: INFO: Successfully updated pod "annotationupdatebb4cb82b-b063-4c14-b532-383ffc450fce" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:06:45.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2206" for this suite. 
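The "Successfully updated pod" step above mutates the pod's annotations and then waits for the projected downward-API file to reflect them; the kubelet refreshes these files on its periodic sync rather than instantly, hence the few seconds between update and teardown. A minimal sketch of the projection involved, with hypothetical volume and file names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file's contents track the pod's annotations,
							// so mutating them is visible inside the container.
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
```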
• [SLOW TEST:8.719 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1560,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:06:45.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 27 00:06:45.893: INFO: Waiting up to 5m0s for pod "pod-da5350a5-19da-4f1d-b88c-6be81bf4892a" in namespace "emptydir-3204" to be "Succeeded or Failed" May 27 00:06:45.905: INFO: Pod "pod-da5350a5-19da-4f1d-b88c-6be81bf4892a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.977115ms May 27 00:06:48.075: INFO: Pod "pod-da5350a5-19da-4f1d-b88c-6be81bf4892a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182409152s May 27 00:06:50.079: INFO: Pod "pod-da5350a5-19da-4f1d-b88c-6be81bf4892a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185759527s STEP: Saw pod success May 27 00:06:50.079: INFO: Pod "pod-da5350a5-19da-4f1d-b88c-6be81bf4892a" satisfied condition "Succeeded or Failed" May 27 00:06:50.081: INFO: Trying to get logs from node latest-worker pod pod-da5350a5-19da-4f1d-b88c-6be81bf4892a container test-container: STEP: delete the pod May 27 00:06:50.115: INFO: Waiting for pod pod-da5350a5-19da-4f1d-b88c-6be81bf4892a to disappear May 27 00:06:50.118: INFO: Pod pod-da5350a5-19da-4f1d-b88c-6be81bf4892a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:06:50.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3204" for this suite. 
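The emptyDir variant above is backed by tmpfs rather than node disk, selected by the volume's Medium field; the test pod then stats the mount to check its type and mode. A minimal sketch:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				// StorageMediumMemory makes the kubelet mount a tmpfs instead
				// of allocating the volume on the node's disk.
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
```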
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":81,"skipped":1564,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:06:50.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:06:55.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2701" for this suite. • [SLOW TEST:5.432 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":82,"skipped":1571,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:06:55.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:06:56.482: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:06:58.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134816, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134816, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134816, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134816, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:07:01.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:07:01.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-531" for this suite. STEP: Destroying namespace "webhook-531-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.543 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":83,"skipped":1592,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:07:02.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:07:03.058: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:07:05.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134823, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134823, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134823, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134822, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:07:08.244: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the 
admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:07:08.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5314" for this suite. STEP: Destroying namespace "webhook-5314-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.353 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":84,"skipped":1605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:07:08.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-199 STEP: creating service affinity-nodeport in namespace services-199 STEP: creating replication controller affinity-nodeport in namespace services-199 I0527 00:07:08.986996 8 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-199, replica count: 3 I0527 00:07:12.037814 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:07:15.038034 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:07:18.038283 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 00:07:18.049: INFO: Creating new exec pod May 27 00:07:23.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-199 execpod-affinityrzp7b -- /bin/sh -x -c nc 
-zv -t -w 2 affinity-nodeport 80' May 27 00:07:26.763: INFO: stderr: "I0527 00:07:26.630234 555 log.go:172] (0xc00003a160) (0xc000815360) Create stream\nI0527 00:07:26.630272 555 log.go:172] (0xc00003a160) (0xc000815360) Stream added, broadcasting: 1\nI0527 00:07:26.633640 555 log.go:172] (0xc00003a160) Reply frame received for 1\nI0527 00:07:26.633706 555 log.go:172] (0xc00003a160) (0xc00072cb40) Create stream\nI0527 00:07:26.633733 555 log.go:172] (0xc00003a160) (0xc00072cb40) Stream added, broadcasting: 3\nI0527 00:07:26.634792 555 log.go:172] (0xc00003a160) Reply frame received for 3\nI0527 00:07:26.634830 555 log.go:172] (0xc00003a160) (0xc00072dae0) Create stream\nI0527 00:07:26.634839 555 log.go:172] (0xc00003a160) (0xc00072dae0) Stream added, broadcasting: 5\nI0527 00:07:26.635846 555 log.go:172] (0xc00003a160) Reply frame received for 5\nI0527 00:07:26.732225 555 log.go:172] (0xc00003a160) Data frame received for 5\nI0527 00:07:26.732256 555 log.go:172] (0xc00072dae0) (5) Data frame handling\nI0527 00:07:26.732277 555 log.go:172] (0xc00072dae0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0527 00:07:26.753349 555 log.go:172] (0xc00003a160) Data frame received for 5\nI0527 00:07:26.753379 555 log.go:172] (0xc00072dae0) (5) Data frame handling\nI0527 00:07:26.753396 555 log.go:172] (0xc00072dae0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0527 00:07:26.753792 555 log.go:172] (0xc00003a160) Data frame received for 5\nI0527 00:07:26.753825 555 log.go:172] (0xc00072dae0) (5) Data frame handling\nI0527 00:07:26.754246 555 log.go:172] (0xc00003a160) Data frame received for 3\nI0527 00:07:26.754290 555 log.go:172] (0xc00072cb40) (3) Data frame handling\nI0527 00:07:26.756278 555 log.go:172] (0xc00003a160) Data frame received for 1\nI0527 00:07:26.756300 555 log.go:172] (0xc000815360) (1) Data frame handling\nI0527 00:07:26.756309 555 log.go:172] (0xc000815360) (1) Data frame sent\nI0527 00:07:26.756327 555 log.go:172] (0xc00003a160) (0xc000815360) Stream removed, broadcasting: 1\nI0527 00:07:26.756505 555 log.go:172] (0xc00003a160) Go away received\nI0527 00:07:26.756614 555 log.go:172] (0xc00003a160) (0xc000815360) Stream removed, broadcasting: 1\nI0527 00:07:26.756639 555 log.go:172] (0xc00003a160) (0xc00072cb40) Stream removed, broadcasting: 3\nI0527 00:07:26.756646 555 log.go:172] (0xc00003a160) (0xc00072dae0) Stream removed, broadcasting: 5\n" May 27 00:07:26.763: INFO: stdout: "" May 27 00:07:26.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-199 execpod-affinityrzp7b -- /bin/sh -x -c nc -zv -t -w 2 10.102.84.10 80' May 27 00:07:26.972: INFO: stderr: "I0527 00:07:26.896942 588 log.go:172] (0xc0000ea4d0) (0xc0002601e0) Create stream\nI0527 00:07:26.897028 588 log.go:172] (0xc0000ea4d0) (0xc0002601e0) Stream added, broadcasting: 1\nI0527 00:07:26.900557 588 log.go:172] (0xc0000ea4d0) Reply frame received for 1\nI0527 00:07:26.900605 588 log.go:172] (0xc0000ea4d0) (0xc0004005a0) Create stream\nI0527 00:07:26.900622 588 log.go:172] (0xc0000ea4d0) (0xc0004005a0) Stream added, broadcasting: 3\nI0527 00:07:26.901790 588 log.go:172] (0xc0000ea4d0) Reply frame received for 3\nI0527 00:07:26.901832 588 log.go:172] (0xc0000ea4d0) (0xc0003c8280) Create stream\nI0527 00:07:26.901842 588 log.go:172] (0xc0000ea4d0) (0xc0003c8280) Stream added, broadcasting: 5\nI0527 00:07:26.902703 588 log.go:172] (0xc0000ea4d0) Reply frame received for 5\nI0527 
00:07:26.964812 588 log.go:172] (0xc0000ea4d0) Data frame received for 5\nI0527 00:07:26.964855 588 log.go:172] (0xc0003c8280) (5) Data frame handling\nI0527 00:07:26.964874 588 log.go:172] (0xc0003c8280) (5) Data frame sent\n+ nc -zv -t -w 2 10.102.84.10 80\nConnection to 10.102.84.10 80 port [tcp/http] succeeded!\nI0527 00:07:26.964927 588 log.go:172] (0xc0000ea4d0) Data frame received for 3\nI0527 00:07:26.964965 588 log.go:172] (0xc0004005a0) (3) Data frame handling\nI0527 00:07:26.964999 588 log.go:172] (0xc0000ea4d0) Data frame received for 5\nI0527 00:07:26.965020 588 log.go:172] (0xc0003c8280) (5) Data frame handling\nI0527 00:07:26.966628 588 log.go:172] (0xc0000ea4d0) Data frame received for 1\nI0527 00:07:26.966647 588 log.go:172] (0xc0002601e0) (1) Data frame handling\nI0527 00:07:26.966678 588 log.go:172] (0xc0002601e0) (1) Data frame sent\nI0527 00:07:26.966711 588 log.go:172] (0xc0000ea4d0) (0xc0002601e0) Stream removed, broadcasting: 1\nI0527 00:07:26.966780 588 log.go:172] (0xc0000ea4d0) Go away received\nI0527 00:07:26.967051 588 log.go:172] (0xc0000ea4d0) (0xc0002601e0) Stream removed, broadcasting: 1\nI0527 00:07:26.967066 588 log.go:172] (0xc0000ea4d0) (0xc0004005a0) Stream removed, broadcasting: 3\nI0527 00:07:26.967075 588 log.go:172] (0xc0000ea4d0) (0xc0003c8280) Stream removed, broadcasting: 5\n" May 27 00:07:26.972: INFO: stdout: "" May 27 00:07:26.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-199 execpod-affinityrzp7b -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30566' May 27 00:07:27.177: INFO: stderr: "I0527 00:07:27.101009 609 log.go:172] (0xc000b311e0) (0xc000307ae0) Create stream\nI0527 00:07:27.101067 609 log.go:172] (0xc000b311e0) (0xc000307ae0) Stream added, broadcasting: 1\nI0527 00:07:27.104008 609 log.go:172] (0xc000b311e0) Reply frame received for 1\nI0527 00:07:27.104066 609 log.go:172] (0xc000b311e0) (0xc000644000) Create stream\nI0527 00:07:27.104086 609 log.go:172] (0xc000b311e0) (0xc000644000) Stream added, broadcasting: 3\nI0527 00:07:27.104993 609 log.go:172] (0xc000b311e0) Reply frame received for 3\nI0527 00:07:27.105022 609 log.go:172] (0xc000b311e0) (0xc000644820) Create stream\nI0527 00:07:27.105030 609 log.go:172] (0xc000b311e0) (0xc000644820) Stream added, broadcasting: 5\nI0527 00:07:27.106094 609 log.go:172] (0xc000b311e0) Reply frame received for 5\nI0527 00:07:27.170326 609 log.go:172] (0xc000b311e0) Data frame received for 5\nI0527 00:07:27.170356 609 log.go:172] (0xc000644820) (5) Data frame handling\nI0527 00:07:27.170379 609 log.go:172] (0xc000644820) (5) Data frame sent\nI0527 00:07:27.170401 609 log.go:172] (0xc000b311e0) Data frame received for 5\nI0527 00:07:27.170418 609 log.go:172] (0xc000644820) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30566\nConnection to 172.17.0.13 30566 port [tcp/30566] succeeded!\nI0527 00:07:27.170461 609 log.go:172] (0xc000b311e0) Data frame received for 3\nI0527 00:07:27.170487 609 log.go:172] (0xc000644000) (3) Data frame handling\nI0527 00:07:27.171901 609 log.go:172] (0xc000b311e0) Data frame received for 1\nI0527 00:07:27.171921 609 log.go:172] (0xc000307ae0) (1) Data frame handling\nI0527 00:07:27.171932 609 log.go:172] (0xc000307ae0) (1) Data frame sent\nI0527 00:07:27.171946 609 log.go:172] (0xc000b311e0) (0xc000307ae0) Stream removed, broadcasting: 1\nI0527 00:07:27.171964 609 log.go:172] (0xc000b311e0) Go away received\nI0527 00:07:27.172235 609 log.go:172] (0xc000b311e0) (0xc000307ae0) 
Stream removed, broadcasting: 1\nI0527 00:07:27.172248 609 log.go:172] (0xc000b311e0) (0xc000644000) Stream removed, broadcasting: 3\nI0527 00:07:27.172253 609 log.go:172] (0xc000b311e0) (0xc000644820) Stream removed, broadcasting: 5\n" May 27 00:07:27.177: INFO: stdout: "" May 27 00:07:27.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-199 execpod-affinityrzp7b -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30566' May 27 00:07:27.547: INFO: stderr: "I0527 00:07:27.327526 628 log.go:172] (0xc00054c000) (0xc000846f00) Create stream\nI0527 00:07:27.327593 628 log.go:172] (0xc00054c000) (0xc000846f00) Stream added, broadcasting: 1\nI0527 00:07:27.329926 628 log.go:172] (0xc00054c000) Reply frame received for 1\nI0527 00:07:27.330079 628 log.go:172] (0xc00054c000) (0xc00083e640) Create stream\nI0527 00:07:27.330113 628 log.go:172] (0xc00054c000) (0xc00083e640) Stream added, broadcasting: 3\nI0527 00:07:27.331399 628 log.go:172] (0xc00054c000) Reply frame received for 3\nI0527 00:07:27.331446 628 log.go:172] (0xc00054c000) (0xc0003b40a0) Create stream\nI0527 00:07:27.331460 628 log.go:172] (0xc00054c000) (0xc0003b40a0) Stream added, broadcasting: 5\nI0527 00:07:27.332572 628 log.go:172] (0xc00054c000) Reply frame received for 5\nI0527 00:07:27.540394 628 log.go:172] (0xc00054c000) Data frame received for 5\nI0527 00:07:27.540420 628 log.go:172] (0xc0003b40a0) (5) Data frame handling\nI0527 00:07:27.540436 628 log.go:172] (0xc0003b40a0) (5) Data frame sent\nI0527 00:07:27.540447 628 log.go:172] (0xc00054c000) Data frame received for 5\nI0527 00:07:27.540456 628 log.go:172] (0xc0003b40a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30566\nConnection to 172.17.0.12 30566 port [tcp/30566] succeeded!\nI0527 00:07:27.540589 628 log.go:172] (0xc0003b40a0) (5) Data frame sent\nI0527 00:07:27.540932 628 log.go:172] (0xc00054c000) Data frame received for 3\nI0527 00:07:27.540960 628 log.go:172] (0xc00083e640) (3) Data frame handling\nI0527 00:07:27.541033 628 log.go:172] (0xc00054c000) Data frame received for 5\nI0527 00:07:27.541076 628 log.go:172] (0xc0003b40a0) (5) Data frame handling\nI0527 00:07:27.542995 628 log.go:172] (0xc00054c000) Data frame received for 1\nI0527 00:07:27.543018 628 log.go:172] (0xc000846f00) (1) Data frame handling\nI0527 00:07:27.543047 628 log.go:172] (0xc000846f00) (1) Data frame sent\nI0527 00:07:27.543065 628 log.go:172] (0xc00054c000) (0xc000846f00) Stream removed, broadcasting: 1\nI0527 00:07:27.543202 628 log.go:172] (0xc00054c000) Go away received\nI0527 00:07:27.543363 628 log.go:172] (0xc00054c000) (0xc000846f00) Stream removed, broadcasting: 1\nI0527 00:07:27.543387 628 log.go:172] (0xc00054c000) (0xc00083e640) Stream removed, broadcasting: 3\nI0527 00:07:27.543406 628 log.go:172] (0xc00054c000) (0xc0003b40a0) Stream removed, broadcasting: 5\n" May 27 00:07:27.547: INFO: stdout: "" May 27 00:07:27.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-199 execpod-affinityrzp7b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30566/ ; done' May 27 00:07:30.989: INFO: stderr: "I0527 00:07:30.801528 648 log.go:172] (0xc000a57a20) (0xc000b260a0) Create stream\nI0527 00:07:30.801614 648 log.go:172] (0xc000a57a20) (0xc000b260a0) Stream added, broadcasting: 1\nI0527 00:07:30.805709 648 log.go:172] (0xc000a57a20) Reply frame received for 1\nI0527 
00:07:30.805775 648 log.go:172] (0xc000a57a20) (0xc000704000) Create stream\nI0527 00:07:30.805783 648 log.go:172] (0xc000a57a20) (0xc000704000) Stream added, broadcasting: 3\nI0527 00:07:30.806588 648 log.go:172] (0xc000a57a20) Reply frame received for 3\nI0527 00:07:30.806626 648 log.go:172] (0xc000a57a20) (0xc0006bf040) Create stream\nI0527 00:07:30.806633 648 log.go:172] (0xc000a57a20) (0xc0006bf040) Stream added, broadcasting: 5\nI0527 00:07:30.807356 648 log.go:172] (0xc000a57a20) Reply frame received for 5\nI0527 00:07:30.897840 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.897891 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.897917 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.897945 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.897954 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.897970 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.903924 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.903945 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.903961 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.904375 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.904402 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.904415 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.904434 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.904446 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.904459 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.908707 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.908731 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.908763 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.909444 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.909487 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.909502 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.909538 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.909552 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.909572 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.913844 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.913879 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.913903 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.914155 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.914192 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.914206 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.914225 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.914235 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.914249 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.918708 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.918731 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.918752 648 log.go:172] 
(0xc000704000) (3) Data frame sent\nI0527 00:07:30.919013 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.919048 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.919062 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.919082 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.919091 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.919109 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.923245 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.923277 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.923308 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.923655 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.923666 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.923674 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.923749 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.923771 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.923788 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.930682 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.930699 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.930716 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.931288 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.931335 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.931372 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.931404 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.931426 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.931443 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0527 00:07:30.931456 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.931494 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.931525 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n 2 http://172.17.0.13:30566/\nI0527 00:07:30.935583 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.935600 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.935617 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.936286 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.936303 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.936315 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.936325 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.936332 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.936356 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.940454 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.940475 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.940495 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.940986 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.941015 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.941028 648 log.go:172] (0xc000704000) (3) Data frame 
sent\nI0527 00:07:30.941045 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.941054 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.941064 648 log.go:172] (0xc0006bf040) (5) Data frame sent\nI0527 00:07:30.941074 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.941083 648 log.go:172] (0xc0006bf040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.941103 648 log.go:172] (0xc0006bf040) (5) Data frame sent\nI0527 00:07:30.945677 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.945714 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.945752 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.946561 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.946581 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.946593 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.946608 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.946623 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.946648 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.951419 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.951447 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.951621 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.951854 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.951893 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.951907 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.951928 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.951946 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.951966 648 log.go:172] (0xc0006bf040) (5) Data frame sent\nI0527 00:07:30.951981 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.951992 648 log.go:172] (0xc0006bf040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.952015 648 log.go:172] (0xc0006bf040) (5) Data frame sent\nI0527 00:07:30.958004 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.958039 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.958065 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.958701 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.958720 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.958737 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.958820 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.958842 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.958866 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.962140 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.962155 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.962162 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.962609 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.962632 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.962651 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\nI0527 
00:07:30.962754 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.962769 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.962779 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.962881 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.962902 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.962923 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.966385 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.966402 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.966409 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.967181 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.967228 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.967256 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.967308 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.967345 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.967369 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.971172 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.971187 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.971198 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.971626 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.971639 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.971648 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.971665 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.971685 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.971698 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.976717 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.976742 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.976763 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.977375 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.977397 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.977415 648 log.go:172] (0xc0006bf040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30566/\nI0527 00:07:30.977696 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.977732 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.977772 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.982792 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.982833 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.982870 648 log.go:172] (0xc000704000) (3) Data frame sent\nI0527 00:07:30.983351 648 log.go:172] (0xc000a57a20) Data frame received for 5\nI0527 00:07:30.983375 648 log.go:172] (0xc0006bf040) (5) Data frame handling\nI0527 00:07:30.983402 648 log.go:172] (0xc000a57a20) Data frame received for 3\nI0527 00:07:30.983419 648 log.go:172] (0xc000704000) (3) Data frame handling\nI0527 00:07:30.985000 648 log.go:172] (0xc000a57a20) Data frame received for 1\nI0527 00:07:30.985030 648 log.go:172] (0xc000b260a0) (1) Data frame handling\nI0527 00:07:30.985050 648 log.go:172] 
(0xc000b260a0) (1) Data frame sent\nI0527 00:07:30.985065 648 log.go:172] (0xc000a57a20) (0xc000b260a0) Stream removed, broadcasting: 1\nI0527 00:07:30.985085 648 log.go:172] (0xc000a57a20) Go away received\nI0527 00:07:30.985613 648 log.go:172] (0xc000a57a20) (0xc000b260a0) Stream removed, broadcasting: 1\nI0527 00:07:30.985636 648 log.go:172] (0xc000a57a20) (0xc000704000) Stream removed, broadcasting: 3\nI0527 00:07:30.985647 648 log.go:172] (0xc000a57a20) (0xc0006bf040) Stream removed, broadcasting: 5\n" May 27 00:07:30.990: INFO: stdout: "\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c\naffinity-nodeport-fw46c" May 27 00:07:30.990: INFO: Received response from host: May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Received response from host: affinity-nodeport-fw46c May 27 00:07:30.990: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-199, will wait for the garbage collector to delete the pods May 27 00:07:31.116: INFO: Deleting ReplicationController affinity-nodeport took: 6.672432ms May 27 00:07:31.816: INFO: Terminating ReplicationController affinity-nodeport pods took: 700.226361ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:07:45.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-199" for this suite. 
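------------------------------
Note: the sixteen identical "affinity-nodeport-fw46c" responses above are the point of this test: with ClientIP session affinity, every request from the exec pod lands on the same backend. A minimal sketch of the Service shape involved, assuming corev1 types from k8s.io/api; the selector label is an assumption, and the commented SessionAffinityConfig shows the extra knob the "session affinity timeout" variant later in this log exercises. The program only builds and prints the object, so it runs without a cluster:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"},
        Spec: corev1.ServiceSpec{
            Type: corev1.ServiceTypeNodePort,
            // ClientIP affinity pins each client to one backend pod, which is
            // why every curl in the log returned the same pod name.
            SessionAffinity: corev1.ServiceAffinityClientIP,
            Selector:        map[string]string{"name": "affinity-nodeport"}, // assumed label
            Ports: []corev1.ServicePort{{
                Port:       80,
                TargetPort: intstr.FromInt(80),
            }},
            // The timeout variant additionally sets, e.g.:
            //   SessionAffinityConfig: &corev1.SessionAffinityConfig{
            //       ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &secs},
            //   }
        },
    }
    out, _ := json.MarshalIndent(svc, "", "  ")
    fmt.Println(string(out))
}
------------------------------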
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:36.946 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":85,"skipped":1644,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:07:45.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-975fa971-65d0-4a12-b192-1aa7b5a8c88d STEP: Creating a pod to test consume configMaps May 27 00:07:45.463: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc" in namespace "configmap-9342" to be "Succeeded or Failed" May 27 00:07:45.512: INFO: Pod "pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc": Phase="Pending", Reason="", readiness=false. Elapsed: 48.352372ms May 27 00:07:47.515: INFO: Pod "pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05209576s May 27 00:07:50.740: INFO: Pod "pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc": Phase="Running", Reason="", readiness=true. Elapsed: 5.276596585s May 27 00:07:52.744: INFO: Pod "pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.280776311s STEP: Saw pod success May 27 00:07:52.744: INFO: Pod "pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc" satisfied condition "Succeeded or Failed" May 27 00:07:52.747: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc container configmap-volume-test: STEP: delete the pod May 27 00:07:52.816: INFO: Waiting for pod pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc to disappear May 27 00:07:52.889: INFO: Pod pod-configmaps-6a0fa5e8-1f3f-49bb-b37c-10f34986d6dc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:07:52.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9342" for this suite. 
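------------------------------
Note: "with mappings as non-root" in the ConfigMap test above means two things: the volume uses Items to project a ConfigMap key to a custom path, and the pod runs under a non-root UID. A sketch of that spec, assuming corev1 types; the key name, projected path, UID, and mount path are illustrative guesses, not values recorded in this run:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1000) // assumed non-root UID
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
        Spec: corev1.PodSpec{
            // Run the whole pod as a non-root user.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            RestartPolicy:   corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                        // The "mapping": expose key data-1 at a chosen relative path.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------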
• [SLOW TEST:7.581 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1650,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:07:52.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:07:53.588: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:07:55.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134873, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134873, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134873, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134873, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:07:58.707: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:07:58.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3670-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:07:59.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4223" for this suite. STEP: Destroying namespace "webhook-4223-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.989 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":87,"skipped":1661,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:07:59.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 27 00:08:00.069: INFO: Waiting up to 5m0s for pod "downward-api-98ac4588-786f-45d2-94fd-ca3a6a3cdc77" in namespace "downward-api-8760" to be "Succeeded or Failed" May 27 00:08:00.080: INFO: Pod "downward-api-98ac4588-786f-45d2-94fd-ca3a6a3cdc77": Phase="Pending", Reason="", readiness=false. Elapsed: 10.651674ms May 27 00:08:02.255: INFO: Pod "downward-api-98ac4588-786f-45d2-94fd-ca3a6a3cdc77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185713695s May 27 00:08:04.273: INFO: Pod "downward-api-98ac4588-786f-45d2-94fd-ca3a6a3cdc77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.203261352s STEP: Saw pod success May 27 00:08:04.273: INFO: Pod "downward-api-98ac4588-786f-45d2-94fd-ca3a6a3cdc77" satisfied condition "Succeeded or Failed" May 27 00:08:04.276: INFO: Trying to get logs from node latest-worker pod downward-api-98ac4588-786f-45d2-94fd-ca3a6a3cdc77 container dapi-container: STEP: delete the pod May 27 00:08:04.317: INFO: Waiting for pod downward-api-98ac4588-786f-45d2-94fd-ca3a6a3cdc77 to disappear May 27 00:08:04.332: INFO: Pod downward-api-98ac4588-786f-45d2-94fd-ca3a6a3cdc77 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:08:04.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8760" for this suite. 
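------------------------------
Note: the Downward API test above relies on fieldRef env wiring: the kubelet resolves metadata and status fields into environment variables at container start. A minimal sketch of the env section, assuming corev1 types; the variable names are illustrative, while the fieldPath strings are the standard downward-API selectors:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Pod name, namespace, and IP exposed as env vars via fieldRef.
    env := []corev1.EnvVar{
        {Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
        {Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
        {Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
    }
    out, _ := json.MarshalIndent(env, "", "  ")
    fmt.Println(string(out))
}
------------------------------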
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:08:04.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:08:08.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1190" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1694,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:08:08.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:08:12.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1492" for this suite. 
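------------------------------
Note: the two quick tests above hinge on two container-spec details: the Kubelet test mounts the root filesystem read-only so writes outside volumes fail, and the Docker Containers test leaves Command/Args empty so the image's ENTRYPOINT and CMD apply. A sketch of the relevant container fields, assuming corev1 types; the container name and image are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    readOnly := true
    c := corev1.Container{
        Name:  "busybox-readonly",
        Image: "busybox",
        // With a read-only root filesystem, any write outside a mounted
        // volume fails, which is what the Kubelet test asserts.
        SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
        // Command and Args deliberately unset: the runtime falls back to the
        // image's ENTRYPOINT/CMD, which the Docker Containers test verifies.
    }
    out, _ := json.MarshalIndent(c, "", "  ")
    fmt.Println(string(out))
}
------------------------------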
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1723,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:08:12.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 27 00:08:13.483: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 27 00:08:15.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134893, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134893, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134893, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134893, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:08:17.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134893, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134893, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134893, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726134893, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:08:20.531: INFO: 
Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:08:20.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:08:21.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2961" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.305 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":91,"skipped":1736,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:08:22.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 27 00:08:22.232: INFO: Waiting up to 5m0s for pod "pod-0dc3f0c5-7e4a-48fa-9b36-d83f84cbfec7" in namespace "emptydir-2287" to be "Succeeded or Failed" May 27 00:08:22.261: INFO: Pod "pod-0dc3f0c5-7e4a-48fa-9b36-d83f84cbfec7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.894826ms May 27 00:08:24.399: INFO: Pod "pod-0dc3f0c5-7e4a-48fa-9b36-d83f84cbfec7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166699801s May 27 00:08:26.405: INFO: Pod "pod-0dc3f0c5-7e4a-48fa-9b36-d83f84cbfec7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.172902663s STEP: Saw pod success May 27 00:08:26.405: INFO: Pod "pod-0dc3f0c5-7e4a-48fa-9b36-d83f84cbfec7" satisfied condition "Succeeded or Failed" May 27 00:08:26.408: INFO: Trying to get logs from node latest-worker pod pod-0dc3f0c5-7e4a-48fa-9b36-d83f84cbfec7 container test-container: STEP: delete the pod May 27 00:08:26.471: INFO: Waiting for pod pod-0dc3f0c5-7e4a-48fa-9b36-d83f84cbfec7 to disappear May 27 00:08:26.650: INFO: Pod pod-0dc3f0c5-7e4a-48fa-9b36-d83f84cbfec7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:08:26.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2287" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1736,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:08:26.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 27 00:08:26.834: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix030191144/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:08:26.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-865" for this suite. 
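------------------------------
Note: the kubectl proxy test above starts the proxy on a unix socket instead of a TCP port, then fetches /api/ through it. A sketch of the client side in Go, assuming a proxy already running via kubectl proxy --unix-socket=<path>; the socket path below is an assumption, not the temp path from this run:

package main

import (
    "context"
    "fmt"
    "io/ioutil"
    "net"
    "net/http"
)

func main() {
    socket := "/tmp/kubectl-proxy.sock" // assumed; match your --unix-socket value
    client := &http.Client{
        Transport: &http.Transport{
            // Route every request over the unix socket instead of TCP.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", socket)
            },
        },
    }
    // The host part is ignored once dialing is overridden.
    resp, err := client.Get("http://localhost/api/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(string(body))
}
------------------------------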
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":93,"skipped":1741,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:08:26.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-676 May 27 00:08:31.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-676 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 27 00:08:31.479: INFO: stderr: "I0527 00:08:31.281931 687 log.go:172] (0xc000980790) (0xc0003095e0) Create stream\nI0527 00:08:31.281985 687 log.go:172] (0xc000980790) (0xc0003095e0) Stream added, broadcasting: 1\nI0527 00:08:31.284253 687 log.go:172] (0xc000980790) Reply frame received for 1\nI0527 00:08:31.284302 687 log.go:172] (0xc000980790) (0xc000678460) Create stream\nI0527 00:08:31.284317 687 log.go:172] (0xc000980790) (0xc000678460) Stream added, broadcasting: 3\nI0527 00:08:31.285334 687 log.go:172] (0xc000980790) Reply frame received for 3\nI0527 00:08:31.285355 687 log.go:172] (0xc000980790) (0xc000309860) Create stream\nI0527 00:08:31.285363 687 log.go:172] (0xc000980790) (0xc000309860) Stream added, broadcasting: 5\nI0527 00:08:31.286280 687 log.go:172] (0xc000980790) Reply frame received for 5\nI0527 00:08:31.404949 687 log.go:172] (0xc000980790) Data frame received for 5\nI0527 00:08:31.404971 687 log.go:172] (0xc000309860) (5) Data frame handling\nI0527 00:08:31.404984 687 log.go:172] (0xc000309860) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0527 00:08:31.471799 687 log.go:172] (0xc000980790) Data frame received for 3\nI0527 00:08:31.471830 687 log.go:172] (0xc000678460) (3) Data frame handling\nI0527 00:08:31.471845 687 log.go:172] (0xc000678460) (3) Data frame sent\nI0527 00:08:31.472156 687 log.go:172] (0xc000980790) Data frame received for 3\nI0527 00:08:31.472169 687 log.go:172] (0xc000678460) (3) Data frame handling\nI0527 00:08:31.472397 687 log.go:172] (0xc000980790) Data frame received for 5\nI0527 00:08:31.472411 687 log.go:172] (0xc000309860) (5) Data frame handling\nI0527 00:08:31.474886 687 log.go:172] (0xc000980790) Data frame received for 1\nI0527 00:08:31.474903 687 log.go:172] (0xc0003095e0) (1) Data frame handling\nI0527 00:08:31.474916 687 log.go:172] (0xc0003095e0) (1) Data frame sent\nI0527 00:08:31.474950 687 log.go:172] (0xc000980790) (0xc0003095e0) Stream removed, broadcasting: 1\nI0527 00:08:31.475010 687 log.go:172] (0xc000980790) Go away received\nI0527 00:08:31.475235 687 log.go:172] (0xc000980790) (0xc0003095e0) 
Stream removed, broadcasting: 1\nI0527 00:08:31.475248 687 log.go:172] (0xc000980790) (0xc000678460) Stream removed, broadcasting: 3\nI0527 00:08:31.475255 687 log.go:172] (0xc000980790) (0xc000309860) Stream removed, broadcasting: 5\n" May 27 00:08:31.480: INFO: stdout: "iptables" May 27 00:08:31.480: INFO: proxyMode: iptables May 27 00:08:31.502: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:08:31.532: INFO: Pod kube-proxy-mode-detector still exists May 27 00:08:33.532: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:08:33.537: INFO: Pod kube-proxy-mode-detector still exists May 27 00:08:35.532: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:08:35.536: INFO: Pod kube-proxy-mode-detector still exists May 27 00:08:37.532: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:08:37.536: INFO: Pod kube-proxy-mode-detector still exists May 27 00:08:39.532: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:08:39.536: INFO: Pod kube-proxy-mode-detector still exists May 27 00:08:41.532: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:08:41.536: INFO: Pod kube-proxy-mode-detector still exists May 27 00:08:43.532: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:08:43.536: INFO: Pod kube-proxy-mode-detector still exists May 27 00:08:45.532: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:08:45.536: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-676 STEP: creating replication controller affinity-nodeport-timeout in namespace services-676 I0527 00:08:45.617644 8 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-676, replica count: 3 I0527 00:08:48.668057 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:08:51.668259 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:08:54.668538 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 00:08:54.679: INFO: Creating new exec pod May 27 00:08:59.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-676 execpod-affinityhzt6w -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 27 00:08:59.951: INFO: stderr: "I0527 00:08:59.856274 707 log.go:172] (0xc00090ebb0) (0xc000640aa0) Create stream\nI0527 00:08:59.856333 707 log.go:172] (0xc00090ebb0) (0xc000640aa0) Stream added, broadcasting: 1\nI0527 00:08:59.859565 707 log.go:172] (0xc00090ebb0) Reply frame received for 1\nI0527 00:08:59.859610 707 log.go:172] (0xc00090ebb0) (0xc00023bf40) Create stream\nI0527 00:08:59.859621 707 log.go:172] (0xc00090ebb0) (0xc00023bf40) Stream added, broadcasting: 3\nI0527 00:08:59.861045 707 log.go:172] (0xc00090ebb0) Reply frame received for 3\nI0527 00:08:59.861085 707 log.go:172] (0xc00090ebb0) (0xc000396780) Create stream\nI0527 00:08:59.861097 707 log.go:172] (0xc00090ebb0) (0xc000396780) Stream added, broadcasting: 5\nI0527 00:08:59.862984 707 log.go:172] (0xc00090ebb0) Reply frame received for 5\nI0527 00:08:59.940752 707 log.go:172] 
(0xc00090ebb0) Data frame received for 5\nI0527 00:08:59.940795 707 log.go:172] (0xc000396780) (5) Data frame handling\nI0527 00:08:59.940831 707 log.go:172] (0xc000396780) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0527 00:08:59.942024 707 log.go:172] (0xc00090ebb0) Data frame received for 5\nI0527 00:08:59.942056 707 log.go:172] (0xc000396780) (5) Data frame handling\nI0527 00:08:59.942086 707 log.go:172] (0xc000396780) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0527 00:08:59.942264 707 log.go:172] (0xc00090ebb0) Data frame received for 5\nI0527 00:08:59.942302 707 log.go:172] (0xc000396780) (5) Data frame handling\nI0527 00:08:59.942402 707 log.go:172] (0xc00090ebb0) Data frame received for 3\nI0527 00:08:59.942423 707 log.go:172] (0xc00023bf40) (3) Data frame handling\nI0527 00:08:59.944079 707 log.go:172] (0xc00090ebb0) Data frame received for 1\nI0527 00:08:59.944111 707 log.go:172] (0xc000640aa0) (1) Data frame handling\nI0527 00:08:59.944143 707 log.go:172] (0xc000640aa0) (1) Data frame sent\nI0527 00:08:59.944177 707 log.go:172] (0xc00090ebb0) (0xc000640aa0) Stream removed, broadcasting: 1\nI0527 00:08:59.944226 707 log.go:172] (0xc00090ebb0) Go away received\nI0527 00:08:59.944653 707 log.go:172] (0xc00090ebb0) (0xc000640aa0) Stream removed, broadcasting: 1\nI0527 00:08:59.944678 707 log.go:172] (0xc00090ebb0) (0xc00023bf40) Stream removed, broadcasting: 3\nI0527 00:08:59.944691 707 log.go:172] (0xc00090ebb0) (0xc000396780) Stream removed, broadcasting: 5\n" May 27 00:08:59.951: INFO: stdout: "" May 27 00:08:59.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-676 execpod-affinityhzt6w -- /bin/sh -x -c nc -zv -t -w 2 10.107.187.64 80' May 27 00:09:00.148: INFO: stderr: "I0527 00:09:00.084290 728 log.go:172] (0xc000a79600) (0xc000b301e0) Create stream\nI0527 00:09:00.084349 728 log.go:172] (0xc000a79600) (0xc000b301e0) Stream added, broadcasting: 1\nI0527 00:09:00.089571 728 log.go:172] (0xc000a79600) Reply frame received for 1\nI0527 00:09:00.089611 728 log.go:172] (0xc000a79600) (0xc0008645a0) Create stream\nI0527 00:09:00.089625 728 log.go:172] (0xc000a79600) (0xc0008645a0) Stream added, broadcasting: 3\nI0527 00:09:00.090505 728 log.go:172] (0xc000a79600) Reply frame received for 3\nI0527 00:09:00.090550 728 log.go:172] (0xc000a79600) (0xc000538500) Create stream\nI0527 00:09:00.090563 728 log.go:172] (0xc000a79600) (0xc000538500) Stream added, broadcasting: 5\nI0527 00:09:00.091406 728 log.go:172] (0xc000a79600) Reply frame received for 5\nI0527 00:09:00.140407 728 log.go:172] (0xc000a79600) Data frame received for 5\nI0527 00:09:00.140446 728 log.go:172] (0xc000538500) (5) Data frame handling\nI0527 00:09:00.140460 728 log.go:172] (0xc000538500) (5) Data frame sent\nI0527 00:09:00.140496 728 log.go:172] (0xc000a79600) Data frame received for 5\nI0527 00:09:00.140506 728 log.go:172] (0xc000538500) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.187.64 80\nConnection to 10.107.187.64 80 port [tcp/http] succeeded!\nI0527 00:09:00.140530 728 log.go:172] (0xc000a79600) Data frame received for 3\nI0527 00:09:00.140548 728 log.go:172] (0xc0008645a0) (3) Data frame handling\nI0527 00:09:00.141964 728 log.go:172] (0xc000a79600) Data frame received for 1\nI0527 00:09:00.141983 728 log.go:172] (0xc000b301e0) (1) Data frame handling\nI0527 00:09:00.141997 728 log.go:172] (0xc000b301e0) (1) Data frame sent\nI0527 
00:09:00.142015 728 log.go:172] (0xc000a79600) (0xc000b301e0) Stream removed, broadcasting: 1\nI0527 00:09:00.142029 728 log.go:172] (0xc000a79600) Go away received\nI0527 00:09:00.142352 728 log.go:172] (0xc000a79600) (0xc000b301e0) Stream removed, broadcasting: 1\nI0527 00:09:00.142370 728 log.go:172] (0xc000a79600) (0xc0008645a0) Stream removed, broadcasting: 3\nI0527 00:09:00.142379 728 log.go:172] (0xc000a79600) (0xc000538500) Stream removed, broadcasting: 5\n" May 27 00:09:00.148: INFO: stdout: "" May 27 00:09:00.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-676 execpod-affinityhzt6w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31080' May 27 00:09:00.386: INFO: stderr: "I0527 00:09:00.290828 749 log.go:172] (0xc0009a31e0) (0xc000b2c780) Create stream\nI0527 00:09:00.290876 749 log.go:172] (0xc0009a31e0) (0xc000b2c780) Stream added, broadcasting: 1\nI0527 00:09:00.295371 749 log.go:172] (0xc0009a31e0) Reply frame received for 1\nI0527 00:09:00.295414 749 log.go:172] (0xc0009a31e0) (0xc000524d20) Create stream\nI0527 00:09:00.295425 749 log.go:172] (0xc0009a31e0) (0xc000524d20) Stream added, broadcasting: 3\nI0527 00:09:00.296458 749 log.go:172] (0xc0009a31e0) Reply frame received for 3\nI0527 00:09:00.296494 749 log.go:172] (0xc0009a31e0) (0xc0000dcdc0) Create stream\nI0527 00:09:00.296504 749 log.go:172] (0xc0009a31e0) (0xc0000dcdc0) Stream added, broadcasting: 5\nI0527 00:09:00.297526 749 log.go:172] (0xc0009a31e0) Reply frame received for 5\nI0527 00:09:00.378141 749 log.go:172] (0xc0009a31e0) Data frame received for 3\nI0527 00:09:00.378174 749 log.go:172] (0xc000524d20) (3) Data frame handling\nI0527 00:09:00.378222 749 log.go:172] (0xc0009a31e0) Data frame received for 5\nI0527 00:09:00.378256 749 log.go:172] (0xc0000dcdc0) (5) Data frame handling\nI0527 00:09:00.378282 749 log.go:172] (0xc0000dcdc0) (5) Data frame sent\nI0527 00:09:00.378305 749 log.go:172] (0xc0009a31e0) Data frame received for 5\nI0527 00:09:00.378331 749 log.go:172] (0xc0000dcdc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31080\nConnection to 172.17.0.13 31080 port [tcp/31080] succeeded!\nI0527 00:09:00.380125 749 log.go:172] (0xc0009a31e0) Data frame received for 1\nI0527 00:09:00.380173 749 log.go:172] (0xc000b2c780) (1) Data frame handling\nI0527 00:09:00.380203 749 log.go:172] (0xc000b2c780) (1) Data frame sent\nI0527 00:09:00.380229 749 log.go:172] (0xc0009a31e0) (0xc000b2c780) Stream removed, broadcasting: 1\nI0527 00:09:00.380254 749 log.go:172] (0xc0009a31e0) Go away received\nI0527 00:09:00.380698 749 log.go:172] (0xc0009a31e0) (0xc000b2c780) Stream removed, broadcasting: 1\nI0527 00:09:00.380733 749 log.go:172] (0xc0009a31e0) (0xc000524d20) Stream removed, broadcasting: 3\nI0527 00:09:00.380748 749 log.go:172] (0xc0009a31e0) (0xc0000dcdc0) Stream removed, broadcasting: 5\n" May 27 00:09:00.386: INFO: stdout: "" May 27 00:09:00.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-676 execpod-affinityhzt6w -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31080' May 27 00:09:00.613: INFO: stderr: "I0527 00:09:00.519204 771 log.go:172] (0xc000a8ae70) (0xc000afe1e0) Create stream\nI0527 00:09:00.519276 771 log.go:172] (0xc000a8ae70) (0xc000afe1e0) Stream added, broadcasting: 1\nI0527 00:09:00.530740 771 log.go:172] (0xc000a8ae70) Reply frame received for 1\nI0527 00:09:00.530798 771 log.go:172] (0xc000a8ae70) (0xc000714be0) 
Create stream\nI0527 00:09:00.530818 771 log.go:172] (0xc000a8ae70) (0xc000714be0) Stream added, broadcasting: 3\nI0527 00:09:00.532132 771 log.go:172] (0xc000a8ae70) Reply frame received for 3\nI0527 00:09:00.532206 771 log.go:172] (0xc000a8ae70) (0xc0007150e0) Create stream\nI0527 00:09:00.532261 771 log.go:172] (0xc000a8ae70) (0xc0007150e0) Stream added, broadcasting: 5\nI0527 00:09:00.533624 771 log.go:172] (0xc000a8ae70) Reply frame received for 5\nI0527 00:09:00.606326 771 log.go:172] (0xc000a8ae70) Data frame received for 3\nI0527 00:09:00.606383 771 log.go:172] (0xc000714be0) (3) Data frame handling\nI0527 00:09:00.606422 771 log.go:172] (0xc000a8ae70) Data frame received for 5\nI0527 00:09:00.606444 771 log.go:172] (0xc0007150e0) (5) Data frame handling\nI0527 00:09:00.606473 771 log.go:172] (0xc0007150e0) (5) Data frame sent\nI0527 00:09:00.606495 771 log.go:172] (0xc000a8ae70) Data frame received for 5\nI0527 00:09:00.606514 771 log.go:172] (0xc0007150e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31080\nConnection to 172.17.0.12 31080 port [tcp/31080] succeeded!\nI0527 00:09:00.607702 771 log.go:172] (0xc000a8ae70) Data frame received for 1\nI0527 00:09:00.607739 771 log.go:172] (0xc000afe1e0) (1) Data frame handling\nI0527 00:09:00.607760 771 log.go:172] (0xc000afe1e0) (1) Data frame sent\nI0527 00:09:00.607772 771 log.go:172] (0xc000a8ae70) (0xc000afe1e0) Stream removed, broadcasting: 1\nI0527 00:09:00.607862 771 log.go:172] (0xc000a8ae70) Go away received\nI0527 00:09:00.608133 771 log.go:172] (0xc000a8ae70) (0xc000afe1e0) Stream removed, broadcasting: 1\nI0527 00:09:00.608149 771 log.go:172] (0xc000a8ae70) (0xc000714be0) Stream removed, broadcasting: 3\nI0527 00:09:00.608159 771 log.go:172] (0xc000a8ae70) (0xc0007150e0) Stream removed, broadcasting: 5\n" May 27 00:09:00.613: INFO: stdout: "" May 27 00:09:00.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-676 execpod-affinityhzt6w -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31080/ ; done' May 27 00:09:00.893: INFO: stderr: "I0527 00:09:00.740236 794 log.go:172] (0xc000b72580) (0xc0004da0a0) Create stream\nI0527 00:09:00.740289 794 log.go:172] (0xc000b72580) (0xc0004da0a0) Stream added, broadcasting: 1\nI0527 00:09:00.742851 794 log.go:172] (0xc000b72580) Reply frame received for 1\nI0527 00:09:00.742917 794 log.go:172] (0xc000b72580) (0xc00023a320) Create stream\nI0527 00:09:00.742939 794 log.go:172] (0xc000b72580) (0xc00023a320) Stream added, broadcasting: 3\nI0527 00:09:00.743701 794 log.go:172] (0xc000b72580) Reply frame received for 3\nI0527 00:09:00.743729 794 log.go:172] (0xc000b72580) (0xc0004da140) Create stream\nI0527 00:09:00.743740 794 log.go:172] (0xc000b72580) (0xc0004da140) Stream added, broadcasting: 5\nI0527 00:09:00.744446 794 log.go:172] (0xc000b72580) Reply frame received for 5\nI0527 00:09:00.799808 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.799840 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.799853 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.799869 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.799876 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.799885 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.805556 794 log.go:172] 
(0xc000b72580) Data frame received for 3\nI0527 00:09:00.805586 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.805608 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.806035 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.806050 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.806065 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.806077 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.806082 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.806088 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.810745 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.810763 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.810775 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.811308 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.811342 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.811355 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.811389 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.811412 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.811432 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.815225 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.815253 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.815286 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.815683 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.815723 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.815733 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.815747 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.815757 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.815763 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.819652 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.819672 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.819711 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.820123 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.820145 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.820155 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.820167 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.820172 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.820178 794 log.go:172] (0xc0004da140) (5) Data frame sent\nI0527 00:09:00.820183 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.820188 794 log.go:172] (0xc0004da140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.820202 794 log.go:172] (0xc0004da140) (5) Data frame sent\nI0527 00:09:00.826644 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.826662 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.826673 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.827490 794 log.go:172] (0xc000b72580) Data frame received 
for 3\nI0527 00:09:00.827507 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.827526 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.827553 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.827569 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.827592 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.831557 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.831572 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.831580 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.832178 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.832197 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.832209 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.832222 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.832230 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.832244 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.836547 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.836565 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.836577 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.837036 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.837063 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.837092 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.837103 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.837287 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.837305 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.841932 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.841951 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.841968 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.842584 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.842605 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.842616 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.842649 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.842676 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.842700 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.846600 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.846619 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.846637 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.846977 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.847006 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.847018 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.847034 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.847049 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.847058 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.851598 794 log.go:172] 
(0xc000b72580) Data frame received for 3\nI0527 00:09:00.851630 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.851805 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.851998 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.852029 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.852037 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.852055 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.852082 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.852099 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.856457 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.856479 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.856681 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.857404 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.857425 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.857436 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.857457 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.857471 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.857488 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.861397 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.861434 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.861460 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.861893 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.861914 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.861989 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.862031 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.862063 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.862088 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.867638 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.867652 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.867659 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.868361 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.868381 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.868412 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.868423 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.868436 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.868445 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.872305 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.872317 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.872333 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.872746 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.872766 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.872778 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0527 00:09:00.872792 794 
log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.872845 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.872874 794 log.go:172] (0xc0004da140) (5) Data frame sent\nI0527 00:09:00.872903 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.872924 794 log.go:172] (0xc00023a320) (3) Data frame handling\n http://172.17.0.13:31080/\nI0527 00:09:00.872947 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.880071 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.880096 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.880115 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.880834 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.880849 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.880857 794 log.go:172] (0xc0004da140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:00.880881 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.880918 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.880949 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.886167 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.886188 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.886198 794 log.go:172] (0xc00023a320) (3) Data frame sent\nI0527 00:09:00.887239 794 log.go:172] (0xc000b72580) Data frame received for 3\nI0527 00:09:00.887270 794 log.go:172] (0xc00023a320) (3) Data frame handling\nI0527 00:09:00.887353 794 log.go:172] (0xc000b72580) Data frame received for 5\nI0527 00:09:00.887371 794 log.go:172] (0xc0004da140) (5) Data frame handling\nI0527 00:09:00.889495 794 log.go:172] (0xc000b72580) Data frame received for 1\nI0527 00:09:00.889515 794 log.go:172] (0xc0004da0a0) (1) Data frame handling\nI0527 00:09:00.889530 794 log.go:172] (0xc0004da0a0) (1) Data frame sent\nI0527 00:09:00.889543 794 log.go:172] (0xc000b72580) (0xc0004da0a0) Stream removed, broadcasting: 1\nI0527 00:09:00.889645 794 log.go:172] (0xc000b72580) Go away received\nI0527 00:09:00.889878 794 log.go:172] (0xc000b72580) (0xc0004da0a0) Stream removed, broadcasting: 1\nI0527 00:09:00.889895 794 log.go:172] (0xc000b72580) (0xc00023a320) Stream removed, broadcasting: 3\nI0527 00:09:00.889905 794 log.go:172] (0xc000b72580) (0xc0004da140) Stream removed, broadcasting: 5\n" May 27 00:09:00.894: INFO: stdout: "\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4\naffinity-nodeport-timeout-j4mh4" May 27 00:09:00.894: INFO: Received response from host: May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 
27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Received response from host: affinity-nodeport-timeout-j4mh4 May 27 00:09:00.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-676 execpod-affinityhzt6w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31080/' May 27 00:09:01.106: INFO: stderr: "I0527 00:09:01.022363 814 log.go:172] (0xc00003ad10) (0xc000138fa0) Create stream\nI0527 00:09:01.022439 814 log.go:172] (0xc00003ad10) (0xc000138fa0) Stream added, broadcasting: 1\nI0527 00:09:01.024735 814 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0527 00:09:01.024766 814 log.go:172] (0xc00003ad10) (0xc00052e460) Create stream\nI0527 00:09:01.024775 814 log.go:172] (0xc00003ad10) (0xc00052e460) Stream added, broadcasting: 3\nI0527 00:09:01.026025 814 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0527 00:09:01.026070 814 log.go:172] (0xc00003ad10) (0xc000139b80) Create stream\nI0527 00:09:01.026091 814 log.go:172] (0xc00003ad10) (0xc000139b80) Stream added, broadcasting: 5\nI0527 00:09:01.026974 814 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0527 00:09:01.092993 814 log.go:172] (0xc00003ad10) Data frame received for 5\nI0527 00:09:01.093022 814 log.go:172] (0xc000139b80) (5) Data frame handling\nI0527 00:09:01.093043 814 log.go:172] (0xc000139b80) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:01.098011 814 log.go:172] (0xc00003ad10) Data frame received for 3\nI0527 00:09:01.098050 814 log.go:172] (0xc00052e460) (3) Data frame handling\nI0527 00:09:01.098078 814 log.go:172] (0xc00052e460) (3) Data frame sent\nI0527 00:09:01.098644 814 log.go:172] (0xc00003ad10) Data frame received for 3\nI0527 00:09:01.098673 814 log.go:172] (0xc00052e460) (3) Data frame handling\nI0527 00:09:01.098831 814 log.go:172] (0xc00003ad10) Data frame received for 5\nI0527 00:09:01.098857 814 log.go:172] (0xc000139b80) (5) Data frame handling\nI0527 00:09:01.100688 814 log.go:172] (0xc00003ad10) Data frame received for 1\nI0527 00:09:01.100728 814 log.go:172] (0xc000138fa0) (1) Data frame handling\nI0527 00:09:01.100760 814 log.go:172] (0xc000138fa0) (1) Data frame sent\nI0527 00:09:01.100818 814 log.go:172] (0xc00003ad10) (0xc000138fa0) Stream removed, broadcasting: 1\nI0527 00:09:01.100869 814 log.go:172] (0xc00003ad10) Go away received\nI0527 00:09:01.101449 814 log.go:172] (0xc00003ad10) (0xc000138fa0) Stream removed, broadcasting: 1\nI0527 00:09:01.101469 814 log.go:172] (0xc00003ad10) (0xc00052e460) Stream removed, broadcasting: 3\nI0527 00:09:01.101478 814 log.go:172] (0xc00003ad10) 
(0xc000139b80) Stream removed, broadcasting: 5\n" May 27 00:09:01.106: INFO: stdout: "affinity-nodeport-timeout-j4mh4" May 27 00:09:16.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-676 execpod-affinityhzt6w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31080/' May 27 00:09:16.358: INFO: stderr: "I0527 00:09:16.242595 836 log.go:172] (0xc000c0cd10) (0xc0006edf40) Create stream\nI0527 00:09:16.242662 836 log.go:172] (0xc000c0cd10) (0xc0006edf40) Stream added, broadcasting: 1\nI0527 00:09:16.244741 836 log.go:172] (0xc000c0cd10) Reply frame received for 1\nI0527 00:09:16.244785 836 log.go:172] (0xc000c0cd10) (0xc0006f8f00) Create stream\nI0527 00:09:16.244797 836 log.go:172] (0xc000c0cd10) (0xc0006f8f00) Stream added, broadcasting: 3\nI0527 00:09:16.246054 836 log.go:172] (0xc000c0cd10) Reply frame received for 3\nI0527 00:09:16.246091 836 log.go:172] (0xc000c0cd10) (0xc000673cc0) Create stream\nI0527 00:09:16.246103 836 log.go:172] (0xc000c0cd10) (0xc000673cc0) Stream added, broadcasting: 5\nI0527 00:09:16.247138 836 log.go:172] (0xc000c0cd10) Reply frame received for 5\nI0527 00:09:16.348008 836 log.go:172] (0xc000c0cd10) Data frame received for 5\nI0527 00:09:16.348030 836 log.go:172] (0xc000673cc0) (5) Data frame handling\nI0527 00:09:16.348042 836 log.go:172] (0xc000673cc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31080/\nI0527 00:09:16.350715 836 log.go:172] (0xc000c0cd10) Data frame received for 3\nI0527 00:09:16.350738 836 log.go:172] (0xc0006f8f00) (3) Data frame handling\nI0527 00:09:16.350757 836 log.go:172] (0xc0006f8f00) (3) Data frame sent\nI0527 00:09:16.351328 836 log.go:172] (0xc000c0cd10) Data frame received for 3\nI0527 00:09:16.351363 836 log.go:172] (0xc0006f8f00) (3) Data frame handling\nI0527 00:09:16.351385 836 log.go:172] (0xc000c0cd10) Data frame received for 5\nI0527 00:09:16.351394 836 log.go:172] (0xc000673cc0) (5) Data frame handling\nI0527 00:09:16.352673 836 log.go:172] (0xc000c0cd10) Data frame received for 1\nI0527 00:09:16.352692 836 log.go:172] (0xc0006edf40) (1) Data frame handling\nI0527 00:09:16.352703 836 log.go:172] (0xc0006edf40) (1) Data frame sent\nI0527 00:09:16.352719 836 log.go:172] (0xc000c0cd10) (0xc0006edf40) Stream removed, broadcasting: 1\nI0527 00:09:16.352991 836 log.go:172] (0xc000c0cd10) (0xc0006edf40) Stream removed, broadcasting: 1\nI0527 00:09:16.353006 836 log.go:172] (0xc000c0cd10) (0xc0006f8f00) Stream removed, broadcasting: 3\nI0527 00:09:16.353016 836 log.go:172] (0xc000c0cd10) (0xc000673cc0) Stream removed, broadcasting: 5\n" May 27 00:09:16.358: INFO: stdout: "affinity-nodeport-timeout-zdpwh" May 27 00:09:16.358: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-676, will wait for the garbage collector to delete the pods May 27 00:09:16.494: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 7.296225ms May 27 00:09:17.094: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.249804ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:09:25.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-676" for this suite. 
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:58.436 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":94,"skipped":1752,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:09:25.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 27 00:09:25.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49d5c29a-2e8c-48d8-b711-3e730da18c75" in namespace "downward-api-9830" to be "Succeeded or Failed"
May 27 00:09:25.461: INFO: Pod "downwardapi-volume-49d5c29a-2e8c-48d8-b711-3e730da18c75": Phase="Pending", Reason="", readiness=false. Elapsed: 26.946902ms
May 27 00:09:27.465: INFO: Pod "downwardapi-volume-49d5c29a-2e8c-48d8-b711-3e730da18c75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031033028s
May 27 00:09:29.470: INFO: Pod "downwardapi-volume-49d5c29a-2e8c-48d8-b711-3e730da18c75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035471108s
STEP: Saw pod success
May 27 00:09:29.470: INFO: Pod "downwardapi-volume-49d5c29a-2e8c-48d8-b711-3e730da18c75" satisfied condition "Succeeded or Failed"
May 27 00:09:29.473: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-49d5c29a-2e8c-48d8-b711-3e730da18c75 container client-container:
STEP: delete the pod
May 27 00:09:29.510: INFO: Waiting for pod downwardapi-volume-49d5c29a-2e8c-48d8-b711-3e730da18c75 to disappear
May 27 00:09:29.516: INFO: Pod downwardapi-volume-49d5c29a-2e8c-48d8-b711-3e730da18c75 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:09:29.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9830" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":95,"skipped":1764,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:09:29.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-526f9d9d-7100-4214-a2b5-860e6de5e66a
STEP: Creating a pod to test consume configMaps
May 27 00:09:29.823: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a20eda07-f7bf-436b-b5b1-b639d2bdb76e" in namespace "projected-5914" to be "Succeeded or Failed"
May 27 00:09:29.840: INFO: Pod "pod-projected-configmaps-a20eda07-f7bf-436b-b5b1-b639d2bdb76e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.026117ms
May 27 00:09:31.957: INFO: Pod "pod-projected-configmaps-a20eda07-f7bf-436b-b5b1-b639d2bdb76e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133854187s
May 27 00:09:33.961: INFO: Pod "pod-projected-configmaps-a20eda07-f7bf-436b-b5b1-b639d2bdb76e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137707793s
STEP: Saw pod success
May 27 00:09:33.961: INFO: Pod "pod-projected-configmaps-a20eda07-f7bf-436b-b5b1-b639d2bdb76e" satisfied condition "Succeeded or Failed"
May 27 00:09:33.963: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a20eda07-f7bf-436b-b5b1-b639d2bdb76e container projected-configmap-volume-test:
STEP: delete the pod
May 27 00:09:34.019: INFO: Waiting for pod pod-projected-configmaps-a20eda07-f7bf-436b-b5b1-b639d2bdb76e to disappear
May 27 00:09:34.023: INFO: Pod pod-projected-configmaps-a20eda07-f7bf-436b-b5b1-b639d2bdb76e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:09:34.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5914" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":96,"skipped":1771,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:09:34.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 27 00:09:34.111: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:09:35.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5759" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":97,"skipped":1772,"failed":0}
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:09:35.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-54e982c3-da20-45f2-bafd-c15d365b66da
STEP: Creating a pod to test consume configMaps
May 27 00:09:35.414: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fd482455-d860-41ee-91b0-f00b2a6982f7" in namespace "projected-971" to be "Succeeded or Failed"
May 27 00:09:35.472: INFO: Pod "pod-projected-configmaps-fd482455-d860-41ee-91b0-f00b2a6982f7": Phase="Pending", Reason="", readiness=false. Elapsed: 57.762147ms
May 27 00:09:37.476: INFO: Pod "pod-projected-configmaps-fd482455-d860-41ee-91b0-f00b2a6982f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06193432s
May 27 00:09:39.501: INFO: Pod "pod-projected-configmaps-fd482455-d860-41ee-91b0-f00b2a6982f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086696226s
STEP: Saw pod success
May 27 00:09:39.501: INFO: Pod "pod-projected-configmaps-fd482455-d860-41ee-91b0-f00b2a6982f7" satisfied condition "Succeeded or Failed"
May 27 00:09:39.504: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-fd482455-d860-41ee-91b0-f00b2a6982f7 container projected-configmap-volume-test:
STEP: delete the pod
May 27 00:09:39.543: INFO: Waiting for pod pod-projected-configmaps-fd482455-d860-41ee-91b0-f00b2a6982f7 to disappear
May 27 00:09:39.565: INFO: Pod pod-projected-configmaps-fd482455-d860-41ee-91b0-f00b2a6982f7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:09:39.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-971" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":98,"skipped":1772,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:09:39.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:09:46.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3781" for this suite.
• [SLOW TEST:7.343 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":99,"skipped":1782,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:09:46.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:09:58.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1347" for this suite.
• [SLOW TEST:11.156 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":100,"skipped":1786,"failed":0}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 00:09:58.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
May 27 00:09:58.162: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 00:10:05.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8976" for this suite.
• [SLOW TEST:7.404 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":101,"skipped":1792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:10:05.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:10:05.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7628b7e-6033-49cf-af39-16534e363bb3" in namespace "downward-api-3797" to be "Succeeded or Failed" May 27 00:10:05.549: INFO: Pod "downwardapi-volume-b7628b7e-6033-49cf-af39-16534e363bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638611ms May 27 00:10:07.555: INFO: Pod "downwardapi-volume-b7628b7e-6033-49cf-af39-16534e363bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016151973s May 27 00:10:09.560: INFO: Pod "downwardapi-volume-b7628b7e-6033-49cf-af39-16534e363bb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020722617s STEP: Saw pod success May 27 00:10:09.560: INFO: Pod "downwardapi-volume-b7628b7e-6033-49cf-af39-16534e363bb3" satisfied condition "Succeeded or Failed" May 27 00:10:09.563: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b7628b7e-6033-49cf-af39-16534e363bb3 container client-container: STEP: delete the pod May 27 00:10:09.610: INFO: Waiting for pod downwardapi-volume-b7628b7e-6033-49cf-af39-16534e363bb3 to disappear May 27 00:10:09.632: INFO: Pod downwardapi-volume-b7628b7e-6033-49cf-af39-16534e363bb3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:10:09.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3797" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:10:09.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 27 00:10:09.699: INFO: Waiting up to 5m0s for pod "downward-api-6abaf80b-b2f0-4b58-a814-b010544571f1" in namespace "downward-api-7072" to be "Succeeded or Failed" May 27 00:10:09.721: INFO: Pod "downward-api-6abaf80b-b2f0-4b58-a814-b010544571f1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.38974ms May 27 00:10:11.725: INFO: Pod "downward-api-6abaf80b-b2f0-4b58-a814-b010544571f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025900443s May 27 00:10:13.730: INFO: Pod "downward-api-6abaf80b-b2f0-4b58-a814-b010544571f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030401614s STEP: Saw pod success May 27 00:10:13.730: INFO: Pod "downward-api-6abaf80b-b2f0-4b58-a814-b010544571f1" satisfied condition "Succeeded or Failed" May 27 00:10:13.733: INFO: Trying to get logs from node latest-worker2 pod downward-api-6abaf80b-b2f0-4b58-a814-b010544571f1 container dapi-container: STEP: delete the pod May 27 00:10:13.826: INFO: Waiting for pod downward-api-6abaf80b-b2f0-4b58-a814-b010544571f1 to disappear May 27 00:10:13.932: INFO: Pod downward-api-6abaf80b-b2f0-4b58-a814-b010544571f1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:10:13.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7072" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1858,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:10:13.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 27 00:10:18.526: INFO: Successfully updated pod "labelsupdate83b1372c-905c-41ca-9a8c-f6cf4be8ccaf" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:10:20.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1259" for this suite. • [SLOW TEST:6.622 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1863,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:10:20.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 27 00:10:20.613: INFO: PodSpec: initContainers in spec.initContainers May 27 00:11:13.340: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-40e608f3-cc41-45b6-a6f1-de01d210ac76", GenerateName:"", Namespace:"init-container-6978", SelfLink:"/api/v1/namespaces/init-container-6978/pods/pod-init-40e608f3-cc41-45b6-a6f1-de01d210ac76", 
UID:"a269721b-3602-455c-9c76-14982446c306", ResourceVersion:"7945824", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726135020, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"613475698"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003aa4040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003aa4060)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003aa4080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003aa40a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lc9fq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000e460c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lc9fq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lc9fq", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lc9fq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003656098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e90000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003656120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003656140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003656148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00365614c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135020, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135020, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135020, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135020, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.122", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.122"}}, StartTime:(*v1.Time)(0xc003aa40c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e900e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e90150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3142a236bd7257c77fe1006880f6297ed68593519c1560a1317d92dcdd1eea96", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003aa4100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003aa40e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0036561cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:11:13.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6978" for this suite. 
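[Editor's sketch] The pod dump above corresponds to a spec like the following reconstruction (fields taken from the dumped InitContainers, Containers, and RestartPolicy; the name is assumed): init1 runs /bin/false and can never succeed, so with restartPolicy Always the kubelet keeps restarting it (RestartCount:3 in the status) while init2 and run1 stay Waiting, which is exactly the "should not start app containers" behaviour being asserted.

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo                 # assumed; the test generates its own name
  spec:
    restartPolicy: Always
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ['/bin/false']            # always fails; blocks init2 and run1
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ['/bin/true']
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.2
      resources:
        limits:
          cpu: 100m
        requests:
          cpu: 100m
  EOF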
• [SLOW TEST:52.811 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":105,"skipped":1869,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:11:13.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:11:13.480: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcf5a6d9-5d9c-4926-8948-88ab9a9276cb" in namespace "downward-api-2305" to be "Succeeded or Failed" May 27 00:11:13.504: INFO: Pod "downwardapi-volume-bcf5a6d9-5d9c-4926-8948-88ab9a9276cb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.386776ms May 27 00:11:15.508: INFO: Pod "downwardapi-volume-bcf5a6d9-5d9c-4926-8948-88ab9a9276cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028686394s May 27 00:11:17.512: INFO: Pod "downwardapi-volume-bcf5a6d9-5d9c-4926-8948-88ab9a9276cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032416146s STEP: Saw pod success May 27 00:11:17.512: INFO: Pod "downwardapi-volume-bcf5a6d9-5d9c-4926-8948-88ab9a9276cb" satisfied condition "Succeeded or Failed" May 27 00:11:17.537: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bcf5a6d9-5d9c-4926-8948-88ab9a9276cb container client-container: STEP: delete the pod May 27 00:11:17.584: INFO: Waiting for pod downwardapi-volume-bcf5a6d9-5d9c-4926-8948-88ab9a9276cb to disappear May 27 00:11:17.592: INFO: Pod downwardapi-volume-bcf5a6d9-5d9c-4926-8948-88ab9a9276cb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:11:17.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2305" for this suite. 
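[Editor's sketch] The cpu-request spec above uses a resourceFieldRef rather than a fieldRef, so the downward-API volume exposes the container's own resource request as a file. A hedged sketch (names, path, and the 250m request are assumptions):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-demo           # assumed name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ['sh', '-c', 'cat /etc/podinfo/cpu_request']
      resources:
        requests:
          cpu: 250m                      # assumed value; this is what gets exposed
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
  EOF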
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":106,"skipped":1971,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:11:17.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0527 00:11:27.730028 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 27 00:11:27.730: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:11:27.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2096" for this suite. 
• [SLOW TEST:10.139 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":107,"skipped":1974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:11:27.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3072 May 27 00:11:31.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3072 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 27 00:11:32.100: INFO: stderr: "I0527 00:11:31.982742 857 log.go:172] (0xc000b89130) (0xc000a003c0) Create stream\nI0527 00:11:31.983123 857 log.go:172] (0xc000b89130) (0xc000a003c0) Stream added, broadcasting: 1\nI0527 00:11:31.988058 857 log.go:172] (0xc000b89130) Reply frame received for 1\nI0527 00:11:31.988135 857 log.go:172] (0xc000b89130) (0xc000510d20) Create stream\nI0527 00:11:31.988161 857 log.go:172] (0xc000b89130) (0xc000510d20) Stream added, broadcasting: 3\nI0527 00:11:31.989081 857 log.go:172] (0xc000b89130) Reply frame received for 3\nI0527 00:11:31.989302 857 log.go:172] (0xc000b89130) (0xc0004e6460) Create stream\nI0527 00:11:31.989321 857 log.go:172] (0xc000b89130) (0xc0004e6460) Stream added, broadcasting: 5\nI0527 00:11:31.990377 857 log.go:172] (0xc000b89130) Reply frame received for 5\nI0527 00:11:32.090543 857 log.go:172] (0xc000b89130) Data frame received for 5\nI0527 00:11:32.090579 857 log.go:172] (0xc0004e6460) (5) Data frame handling\nI0527 00:11:32.090606 857 log.go:172] (0xc0004e6460) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0527 00:11:32.092813 857 log.go:172] (0xc000b89130) Data frame received for 3\nI0527 00:11:32.092837 857 log.go:172] (0xc000510d20) (3) Data frame handling\nI0527 00:11:32.092853 857 log.go:172] (0xc000510d20) (3) Data frame sent\nI0527 00:11:32.093692 857 log.go:172] (0xc000b89130) Data frame received for 5\nI0527 00:11:32.093714 857 log.go:172] (0xc0004e6460) (5) Data frame handling\nI0527 00:11:32.093799 857 log.go:172] (0xc000b89130) Data frame received for 3\nI0527 00:11:32.093813 857 log.go:172] (0xc000510d20) (3) Data frame handling\nI0527 00:11:32.095438 857 log.go:172] 
(0xc000b89130) Data frame received for 1\nI0527 00:11:32.095450 857 log.go:172] (0xc000a003c0) (1) Data frame handling\nI0527 00:11:32.095462 857 log.go:172] (0xc000a003c0) (1) Data frame sent\nI0527 00:11:32.095472 857 log.go:172] (0xc000b89130) (0xc000a003c0) Stream removed, broadcasting: 1\nI0527 00:11:32.095558 857 log.go:172] (0xc000b89130) Go away received\nI0527 00:11:32.095888 857 log.go:172] (0xc000b89130) (0xc000a003c0) Stream removed, broadcasting: 1\nI0527 00:11:32.095913 857 log.go:172] (0xc000b89130) (0xc000510d20) Stream removed, broadcasting: 3\nI0527 00:11:32.095926 857 log.go:172] (0xc000b89130) (0xc0004e6460) Stream removed, broadcasting: 5\n" May 27 00:11:32.100: INFO: stdout: "iptables" May 27 00:11:32.100: INFO: proxyMode: iptables May 27 00:11:32.154: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:11:32.179: INFO: Pod kube-proxy-mode-detector still exists May 27 00:11:34.179: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:11:34.581: INFO: Pod kube-proxy-mode-detector still exists May 27 00:11:36.179: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:11:36.184: INFO: Pod kube-proxy-mode-detector still exists May 27 00:11:38.179: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:11:38.184: INFO: Pod kube-proxy-mode-detector still exists May 27 00:11:40.179: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:11:40.184: INFO: Pod kube-proxy-mode-detector still exists May 27 00:11:42.179: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:11:42.209: INFO: Pod kube-proxy-mode-detector still exists May 27 00:11:44.179: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:11:44.203: INFO: Pod kube-proxy-mode-detector still exists May 27 00:11:46.179: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 27 00:11:46.184: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3072 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3072 I0527 00:11:46.275297 8 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3072, replica count: 3 I0527 00:11:49.325690 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:11:52.325921 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 00:11:52.333: INFO: Creating new exec pod May 27 00:11:57.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3072 execpod-affinitydlm9c -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 27 00:11:57.603: INFO: stderr: "I0527 00:11:57.490779 878 log.go:172] (0xc000ac8fd0) (0xc000b90280) Create stream\nI0527 00:11:57.490846 878 log.go:172] (0xc000ac8fd0) (0xc000b90280) Stream added, broadcasting: 1\nI0527 00:11:57.494797 878 log.go:172] (0xc000ac8fd0) Reply frame received for 1\nI0527 00:11:57.494894 878 log.go:172] (0xc000ac8fd0) (0xc00082a3c0) Create stream\nI0527 00:11:57.494914 878 log.go:172] (0xc000ac8fd0) (0xc00082a3c0) Stream added, broadcasting: 3\nI0527 00:11:57.495751 878 log.go:172] (0xc000ac8fd0) Reply frame received for 3\nI0527 00:11:57.495791 878 log.go:172] 
(0xc000ac8fd0) (0xc00081a0a0) Create stream\nI0527 00:11:57.495805 878 log.go:172] (0xc000ac8fd0) (0xc00081a0a0) Stream added, broadcasting: 5\nI0527 00:11:57.496727 878 log.go:172] (0xc000ac8fd0) Reply frame received for 5\nI0527 00:11:57.582967 878 log.go:172] (0xc000ac8fd0) Data frame received for 5\nI0527 00:11:57.582999 878 log.go:172] (0xc00081a0a0) (5) Data frame handling\nI0527 00:11:57.583028 878 log.go:172] (0xc00081a0a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0527 00:11:57.595729 878 log.go:172] (0xc000ac8fd0) Data frame received for 5\nI0527 00:11:57.595769 878 log.go:172] (0xc00081a0a0) (5) Data frame handling\nI0527 00:11:57.595795 878 log.go:172] (0xc00081a0a0) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0527 00:11:57.595989 878 log.go:172] (0xc000ac8fd0) Data frame received for 3\nI0527 00:11:57.596026 878 log.go:172] (0xc00082a3c0) (3) Data frame handling\nI0527 00:11:57.596175 878 log.go:172] (0xc000ac8fd0) Data frame received for 5\nI0527 00:11:57.596202 878 log.go:172] (0xc00081a0a0) (5) Data frame handling\nI0527 00:11:57.598111 878 log.go:172] (0xc000ac8fd0) Data frame received for 1\nI0527 00:11:57.598131 878 log.go:172] (0xc000b90280) (1) Data frame handling\nI0527 00:11:57.598146 878 log.go:172] (0xc000b90280) (1) Data frame sent\nI0527 00:11:57.598162 878 log.go:172] (0xc000ac8fd0) (0xc000b90280) Stream removed, broadcasting: 1\nI0527 00:11:57.598174 878 log.go:172] (0xc000ac8fd0) Go away received\nI0527 00:11:57.598638 878 log.go:172] (0xc000ac8fd0) (0xc000b90280) Stream removed, broadcasting: 1\nI0527 00:11:57.598657 878 log.go:172] (0xc000ac8fd0) (0xc00082a3c0) Stream removed, broadcasting: 3\nI0527 00:11:57.598668 878 log.go:172] (0xc000ac8fd0) (0xc00081a0a0) Stream removed, broadcasting: 5\n" May 27 00:11:57.603: INFO: stdout: "" May 27 00:11:57.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3072 execpod-affinitydlm9c -- /bin/sh -x -c nc -zv -t -w 2 10.97.11.35 80' May 27 00:11:57.796: INFO: stderr: "I0527 00:11:57.722558 898 log.go:172] (0xc000684160) (0xc000325540) Create stream\nI0527 00:11:57.722606 898 log.go:172] (0xc000684160) (0xc000325540) Stream added, broadcasting: 1\nI0527 00:11:57.724233 898 log.go:172] (0xc000684160) Reply frame received for 1\nI0527 00:11:57.724261 898 log.go:172] (0xc000684160) (0xc000325b80) Create stream\nI0527 00:11:57.724270 898 log.go:172] (0xc000684160) (0xc000325b80) Stream added, broadcasting: 3\nI0527 00:11:57.724951 898 log.go:172] (0xc000684160) Reply frame received for 3\nI0527 00:11:57.724987 898 log.go:172] (0xc000684160) (0xc0000dcfa0) Create stream\nI0527 00:11:57.724995 898 log.go:172] (0xc000684160) (0xc0000dcfa0) Stream added, broadcasting: 5\nI0527 00:11:57.725770 898 log.go:172] (0xc000684160) Reply frame received for 5\nI0527 00:11:57.788410 898 log.go:172] (0xc000684160) Data frame received for 3\nI0527 00:11:57.788457 898 log.go:172] (0xc000325b80) (3) Data frame handling\nI0527 00:11:57.788735 898 log.go:172] (0xc000684160) Data frame received for 5\nI0527 00:11:57.788764 898 log.go:172] (0xc0000dcfa0) (5) Data frame handling\nI0527 00:11:57.788817 898 log.go:172] (0xc0000dcfa0) (5) Data frame sent\nI0527 00:11:57.788841 898 log.go:172] (0xc000684160) Data frame received for 5\nI0527 00:11:57.788862 898 log.go:172] (0xc0000dcfa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.11.35 80\nConnection to 10.97.11.35 80 port [tcp/http] 
succeeded!\nI0527 00:11:57.790209 898 log.go:172] (0xc000684160) Data frame received for 1\nI0527 00:11:57.790222 898 log.go:172] (0xc000325540) (1) Data frame handling\nI0527 00:11:57.790229 898 log.go:172] (0xc000325540) (1) Data frame sent\nI0527 00:11:57.790238 898 log.go:172] (0xc000684160) (0xc000325540) Stream removed, broadcasting: 1\nI0527 00:11:57.790319 898 log.go:172] (0xc000684160) Go away received\nI0527 00:11:57.790487 898 log.go:172] (0xc000684160) (0xc000325540) Stream removed, broadcasting: 1\nI0527 00:11:57.790502 898 log.go:172] (0xc000684160) (0xc000325b80) Stream removed, broadcasting: 3\nI0527 00:11:57.790508 898 log.go:172] (0xc000684160) (0xc0000dcfa0) Stream removed, broadcasting: 5\n" May 27 00:11:57.796: INFO: stdout: "" May 27 00:11:57.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3072 execpod-affinitydlm9c -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.11.35:80/ ; done' May 27 00:11:58.098: INFO: stderr: "I0527 00:11:57.943008 918 log.go:172] (0xc000aa33f0) (0xc000a0e5a0) Create stream\nI0527 00:11:57.943054 918 log.go:172] (0xc000aa33f0) (0xc000a0e5a0) Stream added, broadcasting: 1\nI0527 00:11:57.947984 918 log.go:172] (0xc000aa33f0) Reply frame received for 1\nI0527 00:11:57.948022 918 log.go:172] (0xc000aa33f0) (0xc000526640) Create stream\nI0527 00:11:57.948031 918 log.go:172] (0xc000aa33f0) (0xc000526640) Stream added, broadcasting: 3\nI0527 00:11:57.948911 918 log.go:172] (0xc000aa33f0) Reply frame received for 3\nI0527 00:11:57.948951 918 log.go:172] (0xc000aa33f0) (0xc000456e60) Create stream\nI0527 00:11:57.948963 918 log.go:172] (0xc000aa33f0) (0xc000456e60) Stream added, broadcasting: 5\nI0527 00:11:57.950038 918 log.go:172] (0xc000aa33f0) Reply frame received for 5\nI0527 00:11:58.000140 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.000179 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.000195 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.000217 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.000227 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.000239 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.007895 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.007927 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.007948 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.008528 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.008551 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.008565 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.008582 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.008592 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.008601 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.012513 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.012526 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.012533 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.013073 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.013096 918 log.go:172] (0xc000456e60) (5) Data 
frame handling\nI0527 00:11:58.013334 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.013362 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.013382 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.013402 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.016492 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.016506 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.016516 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.016781 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.016791 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.016797 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.016805 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.016810 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.016816 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.024088 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.024099 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.024106 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.024998 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.025008 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.025014 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.025039 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.025070 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.025107 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.032471 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.032486 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.032495 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.033020 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.033033 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.033042 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.033059 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.033081 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.033098 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.037472 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.037488 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.037498 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.038002 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.038041 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.038060 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.038087 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.038102 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.038139 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.042028 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.042066 918 log.go:172] 
(0xc000526640) (3) Data frame handling\nI0527 00:11:58.042109 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.042548 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.042596 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.042626 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.042658 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.042683 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.042705 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.048971 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.048987 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.048995 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.049837 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.049865 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.049877 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.049892 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.049908 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.049921 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.053580 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.053600 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.053617 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.054191 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.054210 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.054239 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.054259 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.054272 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.054310 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.057717 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.057756 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.057781 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.057882 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.057917 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.057940 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.057969 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.058001 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.058025 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.066162 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.066182 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.066197 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.066926 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.066951 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.066970 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.067014 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.067061 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 
00:11:58.067106 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.070564 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.070581 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.070589 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.071519 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.071549 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.071591 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.071630 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.071649 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.071669 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.075760 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.075781 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.075798 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.076262 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.076319 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.076343 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.076366 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.076376 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.076401 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.080077 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.080096 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.080122 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.080489 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.080515 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.080528 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.080547 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.080565 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.080577 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.085319 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.085343 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.085354 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.085778 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.085798 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.085809 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.085848 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.085865 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.085877 918 log.go:172] (0xc000456e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.089702 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.089723 918 log.go:172] (0xc000526640) (3) Data frame handling\nI0527 00:11:58.089735 918 log.go:172] (0xc000526640) (3) Data frame sent\nI0527 00:11:58.090168 918 log.go:172] (0xc000aa33f0) Data frame received for 3\nI0527 00:11:58.090189 918 log.go:172] (0xc000526640) (3) Data 
frame handling\nI0527 00:11:58.090286 918 log.go:172] (0xc000aa33f0) Data frame received for 5\nI0527 00:11:58.090312 918 log.go:172] (0xc000456e60) (5) Data frame handling\nI0527 00:11:58.091945 918 log.go:172] (0xc000aa33f0) Data frame received for 1\nI0527 00:11:58.091975 918 log.go:172] (0xc000a0e5a0) (1) Data frame handling\nI0527 00:11:58.091996 918 log.go:172] (0xc000a0e5a0) (1) Data frame sent\nI0527 00:11:58.092027 918 log.go:172] (0xc000aa33f0) (0xc000a0e5a0) Stream removed, broadcasting: 1\nI0527 00:11:58.092039 918 log.go:172] (0xc000aa33f0) Go away received\nI0527 00:11:58.092436 918 log.go:172] (0xc000aa33f0) (0xc000a0e5a0) Stream removed, broadcasting: 1\nI0527 00:11:58.092466 918 log.go:172] (0xc000aa33f0) (0xc000526640) Stream removed, broadcasting: 3\nI0527 00:11:58.092487 918 log.go:172] (0xc000aa33f0) (0xc000456e60) Stream removed, broadcasting: 5\n" May 27 00:11:58.098: INFO: stdout: "\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm\naffinity-clusterip-timeout-7nnwm" May 27 00:11:58.098: INFO: Received response from host: May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Received response from host: affinity-clusterip-timeout-7nnwm May 27 00:11:58.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3072 execpod-affinitydlm9c -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.11.35:80/' May 27 00:11:58.295: INFO: stderr: "I0527 00:11:58.223143 938 log.go:172] (0xc000c653f0) (0xc000ab8780) Create stream\nI0527 00:11:58.223190 938 log.go:172] (0xc000c653f0) (0xc000ab8780) Stream added, broadcasting: 1\nI0527 00:11:58.225802 938 log.go:172] (0xc000c653f0) Reply frame received for 1\nI0527 00:11:58.225830 938 log.go:172] (0xc000c653f0) 
(0xc000af81e0) Create stream\nI0527 00:11:58.225838 938 log.go:172] (0xc000c653f0) (0xc000af81e0) Stream added, broadcasting: 3\nI0527 00:11:58.226684 938 log.go:172] (0xc000c653f0) Reply frame received for 3\nI0527 00:11:58.226723 938 log.go:172] (0xc000c653f0) (0xc000ab8820) Create stream\nI0527 00:11:58.226735 938 log.go:172] (0xc000c653f0) (0xc000ab8820) Stream added, broadcasting: 5\nI0527 00:11:58.227553 938 log.go:172] (0xc000c653f0) Reply frame received for 5\nI0527 00:11:58.286471 938 log.go:172] (0xc000c653f0) Data frame received for 5\nI0527 00:11:58.286496 938 log.go:172] (0xc000ab8820) (5) Data frame handling\nI0527 00:11:58.286511 938 log.go:172] (0xc000ab8820) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:11:58.288699 938 log.go:172] (0xc000c653f0) Data frame received for 3\nI0527 00:11:58.288717 938 log.go:172] (0xc000af81e0) (3) Data frame handling\nI0527 00:11:58.288727 938 log.go:172] (0xc000af81e0) (3) Data frame sent\nI0527 00:11:58.289243 938 log.go:172] (0xc000c653f0) Data frame received for 3\nI0527 00:11:58.289261 938 log.go:172] (0xc000af81e0) (3) Data frame handling\nI0527 00:11:58.289278 938 log.go:172] (0xc000c653f0) Data frame received for 5\nI0527 00:11:58.289285 938 log.go:172] (0xc000ab8820) (5) Data frame handling\nI0527 00:11:58.290726 938 log.go:172] (0xc000c653f0) Data frame received for 1\nI0527 00:11:58.290743 938 log.go:172] (0xc000ab8780) (1) Data frame handling\nI0527 00:11:58.290752 938 log.go:172] (0xc000ab8780) (1) Data frame sent\nI0527 00:11:58.290762 938 log.go:172] (0xc000c653f0) (0xc000ab8780) Stream removed, broadcasting: 1\nI0527 00:11:58.290781 938 log.go:172] (0xc000c653f0) Go away received\nI0527 00:11:58.291278 938 log.go:172] (0xc000c653f0) (0xc000ab8780) Stream removed, broadcasting: 1\nI0527 00:11:58.291301 938 log.go:172] (0xc000c653f0) (0xc000af81e0) Stream removed, broadcasting: 3\nI0527 00:11:58.291315 938 log.go:172] (0xc000c653f0) (0xc000ab8820) Stream removed, broadcasting: 5\n" May 27 00:11:58.296: INFO: stdout: "affinity-clusterip-timeout-7nnwm" May 27 00:12:13.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3072 execpod-affinitydlm9c -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.11.35:80/' May 27 00:12:13.528: INFO: stderr: "I0527 00:12:13.423198 958 log.go:172] (0xc00003b8c0) (0xc000c146e0) Create stream\nI0527 00:12:13.423253 958 log.go:172] (0xc00003b8c0) (0xc000c146e0) Stream added, broadcasting: 1\nI0527 00:12:13.426986 958 log.go:172] (0xc00003b8c0) Reply frame received for 1\nI0527 00:12:13.427054 958 log.go:172] (0xc00003b8c0) (0xc0006ca460) Create stream\nI0527 00:12:13.427074 958 log.go:172] (0xc00003b8c0) (0xc0006ca460) Stream added, broadcasting: 3\nI0527 00:12:13.428045 958 log.go:172] (0xc00003b8c0) Reply frame received for 3\nI0527 00:12:13.428084 958 log.go:172] (0xc00003b8c0) (0xc0006cad20) Create stream\nI0527 00:12:13.428104 958 log.go:172] (0xc00003b8c0) (0xc0006cad20) Stream added, broadcasting: 5\nI0527 00:12:13.429302 958 log.go:172] (0xc00003b8c0) Reply frame received for 5\nI0527 00:12:13.514530 958 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0527 00:12:13.514563 958 log.go:172] (0xc0006cad20) (5) Data frame handling\nI0527 00:12:13.514586 958 log.go:172] (0xc0006cad20) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.11.35:80/\nI0527 00:12:13.519714 958 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0527 
00:12:13.519752 958 log.go:172] (0xc0006ca460) (3) Data frame handling\nI0527 00:12:13.519790 958 log.go:172] (0xc0006ca460) (3) Data frame sent\nI0527 00:12:13.520813 958 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0527 00:12:13.520857 958 log.go:172] (0xc0006cad20) (5) Data frame handling\nI0527 00:12:13.520911 958 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0527 00:12:13.520935 958 log.go:172] (0xc0006ca460) (3) Data frame handling\nI0527 00:12:13.522749 958 log.go:172] (0xc00003b8c0) Data frame received for 1\nI0527 00:12:13.522777 958 log.go:172] (0xc000c146e0) (1) Data frame handling\nI0527 00:12:13.522793 958 log.go:172] (0xc000c146e0) (1) Data frame sent\nI0527 00:12:13.522805 958 log.go:172] (0xc00003b8c0) (0xc000c146e0) Stream removed, broadcasting: 1\nI0527 00:12:13.522837 958 log.go:172] (0xc00003b8c0) Go away received\nI0527 00:12:13.523142 958 log.go:172] (0xc00003b8c0) (0xc000c146e0) Stream removed, broadcasting: 1\nI0527 00:12:13.523160 958 log.go:172] (0xc00003b8c0) (0xc0006ca460) Stream removed, broadcasting: 3\nI0527 00:12:13.523170 958 log.go:172] (0xc00003b8c0) (0xc0006cad20) Stream removed, broadcasting: 5\n" May 27 00:12:13.528: INFO: stdout: "affinity-clusterip-timeout-mbq69" May 27 00:12:13.528: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3072, will wait for the garbage collector to delete the pods May 27 00:12:13.623: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.192876ms May 27 00:12:14.223: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.220071ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:12:25.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3072" for this suite. 
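[Editor's sketch] The affinity behaviour logged above (all sixteen back-to-back requests answered by affinity-clusterip-timeout-7nnwm, then a different backend, affinity-clusterip-timeout-mbq69, after the ~15s idle wait) is what ClientIP session affinity with a short timeout produces. A sketch of such a Service; the selector label, backend port, and 10-second timeout are assumptions, not values read from this log:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-clusterip-timeout
  spec:
    selector:
      name: affinity-clusterip-timeout   # assumed to match the RC's pod label
    ports:
    - port: 80
      targetPort: 9376                   # assumed backend port
    sessionAffinity: ClientIP            # pin each client IP to one endpoint
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10               # assumed; idle clients are re-balanced after this
  EOF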
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:57.647 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":108,"skipped":2007,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:12:25.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 27 00:12:25.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1753' May 27 00:12:25.833: INFO: stderr: "" May 27 00:12:25.833: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 27 00:12:25.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1753' May 27 00:12:25.983: INFO: stderr: "" May 27 00:12:25.983: INFO: stdout: "update-demo-nautilus-m52sh update-demo-nautilus-pdq6x " May 27 00:12:25.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m52sh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:26.097: INFO: stderr: "" May 27 00:12:26.097: INFO: stdout: "" May 27 00:12:26.097: INFO: update-demo-nautilus-m52sh is created but not running May 27 00:12:31.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1753' May 27 00:12:31.220: INFO: stderr: "" May 27 00:12:31.220: INFO: stdout: "update-demo-nautilus-m52sh update-demo-nautilus-pdq6x " May 27 00:12:31.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m52sh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:31.329: INFO: stderr: "" May 27 00:12:31.329: INFO: stdout: "true" May 27 00:12:31.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m52sh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:31.416: INFO: stderr: "" May 27 00:12:31.416: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 27 00:12:31.416: INFO: validating pod update-demo-nautilus-m52sh May 27 00:12:31.431: INFO: got data: { "image": "nautilus.jpg" } May 27 00:12:31.432: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 27 00:12:31.432: INFO: update-demo-nautilus-m52sh is verified up and running May 27 00:12:31.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pdq6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:31.533: INFO: stderr: "" May 27 00:12:31.533: INFO: stdout: "true" May 27 00:12:31.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pdq6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:31.653: INFO: stderr: "" May 27 00:12:31.653: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 27 00:12:31.653: INFO: validating pod update-demo-nautilus-pdq6x May 27 00:12:31.669: INFO: got data: { "image": "nautilus.jpg" } May 27 00:12:31.669: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 27 00:12:31.669: INFO: update-demo-nautilus-pdq6x is verified up and running STEP: scaling down the replication controller May 27 00:12:31.671: INFO: scanned /root for discovery docs: May 27 00:12:31.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1753' May 27 00:12:32.898: INFO: stderr: "" May 27 00:12:32.898: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 27 00:12:32.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1753' May 27 00:12:33.003: INFO: stderr: "" May 27 00:12:33.003: INFO: stdout: "update-demo-nautilus-m52sh update-demo-nautilus-pdq6x " STEP: Replicas for name=update-demo: expected=1 actual=2 May 27 00:12:38.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1753' May 27 00:12:38.112: INFO: stderr: "" May 27 00:12:38.112: INFO: stdout: "update-demo-nautilus-m52sh update-demo-nautilus-pdq6x " STEP: Replicas for name=update-demo: expected=1 actual=2 May 27 00:12:43.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1753' May 27 00:12:43.225: INFO: stderr: "" May 27 00:12:43.225: INFO: stdout: "update-demo-nautilus-m52sh update-demo-nautilus-pdq6x " STEP: Replicas for name=update-demo: expected=1 actual=2 May 27 00:12:48.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1753' May 27 00:12:48.329: INFO: stderr: "" May 27 00:12:48.329: INFO: stdout: "update-demo-nautilus-pdq6x " May 27 00:12:48.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pdq6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:48.421: INFO: stderr: "" May 27 00:12:48.421: INFO: stdout: "true" May 27 00:12:48.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pdq6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:48.518: INFO: stderr: "" May 27 00:12:48.519: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 27 00:12:48.519: INFO: validating pod update-demo-nautilus-pdq6x May 27 00:12:48.522: INFO: got data: { "image": "nautilus.jpg" } May 27 00:12:48.522: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 27 00:12:48.523: INFO: update-demo-nautilus-pdq6x is verified up and running STEP: scaling up the replication controller May 27 00:12:48.525: INFO: scanned /root for discovery docs: May 27 00:12:48.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1753' May 27 00:12:49.707: INFO: stderr: "" May 27 00:12:49.707: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 27 00:12:49.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1753' May 27 00:12:49.804: INFO: stderr: "" May 27 00:12:49.804: INFO: stdout: "update-demo-nautilus-kxb99 update-demo-nautilus-pdq6x " May 27 00:12:49.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxb99 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:49.928: INFO: stderr: "" May 27 00:12:49.928: INFO: stdout: "" May 27 00:12:49.928: INFO: update-demo-nautilus-kxb99 is created but not running May 27 00:12:54.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1753' May 27 00:12:55.035: INFO: stderr: "" May 27 00:12:55.035: INFO: stdout: "update-demo-nautilus-kxb99 update-demo-nautilus-pdq6x " May 27 00:12:55.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxb99 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:55.136: INFO: stderr: "" May 27 00:12:55.136: INFO: stdout: "true" May 27 00:12:55.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxb99 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:55.234: INFO: stderr: "" May 27 00:12:55.234: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 27 00:12:55.234: INFO: validating pod update-demo-nautilus-kxb99 May 27 00:12:55.238: INFO: got data: { "image": "nautilus.jpg" } May 27 00:12:55.238: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 27 00:12:55.238: INFO: update-demo-nautilus-kxb99 is verified up and running May 27 00:12:55.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pdq6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:55.335: INFO: stderr: "" May 27 00:12:55.335: INFO: stdout: "true" May 27 00:12:55.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pdq6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1753' May 27 00:12:55.431: INFO: stderr: "" May 27 00:12:55.431: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 27 00:12:55.431: INFO: validating pod update-demo-nautilus-pdq6x May 27 00:12:55.434: INFO: got data: { "image": "nautilus.jpg" } May 27 00:12:55.434: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 27 00:12:55.434: INFO: update-demo-nautilus-pdq6x is verified up and running STEP: using delete to clean up resources May 27 00:12:55.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1753' May 27 00:12:55.550: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 27 00:12:55.550: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 27 00:12:55.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1753' May 27 00:12:55.650: INFO: stderr: "No resources found in kubectl-1753 namespace.\n" May 27 00:12:55.650: INFO: stdout: "" May 27 00:12:55.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1753 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 27 00:12:55.755: INFO: stderr: "" May 27 00:12:55.755: INFO: stdout: "update-demo-nautilus-kxb99\nupdate-demo-nautilus-pdq6x\n" May 27 00:12:56.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1753' May 27 00:12:56.353: INFO: stderr: "No resources found in kubectl-1753 namespace.\n" May 27 00:12:56.353: INFO: stdout: "" May 27 00:12:56.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1753 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 27 00:12:56.454: INFO: stderr: "" May 27 00:12:56.454: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:12:56.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1753" for this suite. 
• [SLOW TEST:31.076 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":109,"skipped":2020,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:12:56.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:14:56.931: INFO: Deleting pod "var-expansion-8b45e329-487b-4aa2-8055-eb8bae836d7c" in namespace "var-expansion-9601" May 27 00:14:56.935: INFO: Wait up to 5m0s for pod "var-expansion-8b45e329-487b-4aa2-8055-eb8bae836d7c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:15:00.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9601" for this suite. • [SLOW TEST:124.516 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":110,"skipped":2028,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:15:00.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:15:18.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2061" for this suite. • [SLOW TEST:17.062 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":111,"skipped":2028,"failed":0} [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:15:18.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:15:22.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-511" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":112,"skipped":2028,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:15:22.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2308.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2308.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2308.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2308.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2308.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2308.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 27 00:15:28.496: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:28.499: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:28.504: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:28.507: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:28.516: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:28.519: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:28.521: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:28.524: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:28.530: INFO: Lookups using dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local] May 27 00:15:33.535: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource 
(get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:33.539: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:33.542: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:33.546: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:33.557: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:33.560: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:33.563: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:33.566: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:33.574: INFO: Lookups using dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local] May 27 00:15:38.535: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:38.540: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:38.544: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:38.548: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local from 
pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:38.559: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:38.562: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:38.565: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:38.569: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:38.576: INFO: Lookups using dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local] May 27 00:15:43.546: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:43.550: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:43.553: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:43.556: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:43.565: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:43.568: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods 
dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:43.571: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:43.574: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:43.580: INFO: Lookups using dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local] May 27 00:15:48.535: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:48.540: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:48.543: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:48.546: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:48.556: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:48.560: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:48.563: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:48.566: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:48.573: INFO: Lookups using dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local] May 27 00:15:53.541: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:53.545: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:53.548: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:53.551: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:53.561: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:53.565: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:53.568: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:53.572: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local from pod dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d: the server could not find the requested resource (get pods dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d) May 27 00:15:53.579: INFO: Lookups using dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2308.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2308.svc.cluster.local jessie_udp@dns-test-service-2.dns-2308.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2308.svc.cluster.local] May 27 00:15:58.574: INFO: DNS probes using dns-2308/dns-test-4afcbec2-56fc-4658-91a1-81df15dc918d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:15:59.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2308" for this suite. • [SLOW TEST:37.086 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":113,"skipped":2037,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:15:59.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:15:59.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7120" for this suite. 
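(The three discovery fetches named in the STEP lines above map directly onto raw API-server GETs; a sketch using kubectl's raw passthrough, walking the same paths the test walks:)

    kubectl get --raw /apis                           # group list; should contain apiextensions.k8s.io
    kubectl get --raw /apis/apiextensions.k8s.io      # group document; should list version v1
    kubectl get --raw /apis/apiextensions.k8s.io/v1   # version document; should list customresourcedefinitions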
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":114,"skipped":2084,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:15:59.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5530.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5530.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 27 00:16:05.611: INFO: DNS probes using dns-test-f288870a-134c-4908-a8fe-73a8fa6d2e5a succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5530.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5530.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 27 00:16:13.745: INFO: File wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:13.748: INFO: File jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:13.748: INFO: Lookups using dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 failed for: [wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local] May 27 00:16:18.754: INFO: File wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:18.759: INFO: File jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 27 00:16:18.759: INFO: Lookups using dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 failed for: [wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local] May 27 00:16:23.754: INFO: File wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:23.759: INFO: File jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:23.759: INFO: Lookups using dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 failed for: [wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local] May 27 00:16:28.754: INFO: File wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:28.758: INFO: File jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:28.758: INFO: Lookups using dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 failed for: [wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local] May 27 00:16:33.755: INFO: File wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:33.759: INFO: File jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local from pod dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 contains 'foo.example.com. ' instead of 'bar.example.com.' May 27 00:16:33.759: INFO: Lookups using dns-5530/dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 failed for: [wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local] May 27 00:16:38.758: INFO: DNS probes using dns-test-07f81a73-ec59-4002-81e5-58e73a443b44 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5530.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5530.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5530.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5530.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 27 00:16:47.617: INFO: DNS probes using dns-test-0d49686a-2c8a-4bfb-8766-1691977bed30 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:16:48.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5530" for this suite. 
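(Sketch of the ExternalName setup this test drives, reusing the names from the run; spec.type and externalName are the stock API fields, and the dig invocation is the one embedded in the probe script above. The dig must be run from inside a pod so it hits the cluster DNS.)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: dns-test-service-3
      namespace: dns-5530
    spec:
      type: ExternalName
      externalName: foo.example.com     # later patched to bar.example.com by the test
    EOF

    # from inside a pod: the cluster DNS answers with a CNAME to the externalName
    dig +short dns-test-service-3.dns-5530.svc.cluster.local CNAME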
• [SLOW TEST:48.847 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":115,"skipped":2120,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:16:48.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:16:48.820: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:16:50.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135408, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135408, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135408, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135408, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:16:52.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135408, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135408, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135408, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135408, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying 
the service has paired with the endpoint May 27 00:16:55.872: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 27 00:16:59.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-8983 to-be-attached-pod -i -c=container1' May 27 00:17:00.104: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:00.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8983" for this suite. STEP: Destroying namespace "webhook-8983-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.084 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":116,"skipped":2120,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:00.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 27 00:17:00.401: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:00.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6935" for this suite. 
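(What the proxy test does, reduced to a sketch: -p 0 asks kubectl to bind an ephemeral port, which the proxy reports on startup, and the suite then curls /api/ through it. <PORT> below is a placeholder for whatever port the proxy reports.)

    kubectl proxy -p 0 --disable-filter &   # prints e.g. "Starting to serve on 127.0.0.1:<PORT>"
    curl http://127.0.0.1:<PORT>/api/       # substitute the reported port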
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":117,"skipped":2138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:00.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:17:01.682: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:17:03.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135421, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135421, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135421, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:17:06.827: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:08.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9893" for this suite. STEP: Destroying namespace "webhook-9893-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.560 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":118,"skipped":2188,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:08.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2045.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2045.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2045.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2045.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2045.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2045.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 27 00:17:14.708: INFO: DNS probes using dns-2045/dns-test-9d200749-a355-4fe5-8c5b-a51f543e5ea1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:14.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2045" for this suite. 
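------------------------------
Note on the DNS spec above: the probe scripts derive each pod's A record by dashing its IP, so a pod at 10.244.1.7 in namespace dns-2045 is resolvable as 10-244-1-7.dns-2045.pod.cluster.local. A small Go sketch of the same name construction; the IP is illustrative, and the lookup only succeeds from inside a cluster:

package main

import (
	"fmt"
	"net"
	"strings"
)

// podARecord rebuilds the name the probe scripts compute with
// hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'.
func podARecord(podIP, namespace string) string {
	return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	name := podARecord("10.244.1.7", "dns-2045") // the IP is illustrative
	fmt.Println(name)                            // 10-244-1-7.dns-2045.pod.cluster.local
	if addrs, err := net.LookupHost(name); err == nil {
		fmt.Println("resolved:", addrs) // only resolves inside the cluster
	}
}
------------------------------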
• [SLOW TEST:6.707 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":119,"skipped":2195,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:14.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-8206d05f-bfbf-4cd1-913a-affe24a51c50 STEP: Creating a pod to test consume configMaps May 27 00:17:15.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b" in namespace "configmap-4045" to be "Succeeded or Failed" May 27 00:17:15.319: INFO: Pod "pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.891352ms May 27 00:17:17.323: INFO: Pod "pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0622059s May 27 00:17:19.336: INFO: Pod "pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b": Phase="Running", Reason="", readiness=true. Elapsed: 4.075301731s May 27 00:17:21.341: INFO: Pod "pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080293932s STEP: Saw pod success May 27 00:17:21.341: INFO: Pod "pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b" satisfied condition "Succeeded or Failed" May 27 00:17:21.345: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b container configmap-volume-test: STEP: delete the pod May 27 00:17:21.380: INFO: Waiting for pod pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b to disappear May 27 00:17:21.385: INFO: Pod pod-configmaps-286333a5-ccb6-4f59-9dd6-a43b3318bf8b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:21.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4045" for this suite. 
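------------------------------
Note on the "as non-root" variant above: the ConfigMap volume is consumed by a pod running under a non-root UID, which relies on a pod-level security context. A hedged sketch of that piece; the UID and fsGroup values are illustrative, not the test's own:

package example

import corev1 "k8s.io/api/core/v1"

// nonRootPodSecurityContext makes every container in the pod run as the
// given UID and lets mounted volume files be group-accessible via fsGroup.
// Both values here are illustrative.
func nonRootPodSecurityContext() *corev1.PodSecurityContext {
	uid := int64(1000)
	fsGroup := int64(1000)
	return &corev1.PodSecurityContext{
		RunAsUser: &uid,
		FSGroup:   &fsGroup,
	}
}
------------------------------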
• [SLOW TEST:6.553 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":2196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:21.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:32.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4038" for this suite. • [SLOW TEST:11.279 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":121,"skipped":2223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:32.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 27 00:17:32.907: INFO: Waiting up to 5m0s for pod "downward-api-15fea7a0-8e2c-4cb2-ac13-54b5865e9e00" in namespace "downward-api-5473" to be "Succeeded or Failed" May 27 00:17:32.952: INFO: Pod "downward-api-15fea7a0-8e2c-4cb2-ac13-54b5865e9e00": Phase="Pending", Reason="", readiness=false. Elapsed: 45.015533ms May 27 00:17:35.016: INFO: Pod "downward-api-15fea7a0-8e2c-4cb2-ac13-54b5865e9e00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109029825s May 27 00:17:37.176: INFO: Pod "downward-api-15fea7a0-8e2c-4cb2-ac13-54b5865e9e00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269317636s STEP: Saw pod success May 27 00:17:37.176: INFO: Pod "downward-api-15fea7a0-8e2c-4cb2-ac13-54b5865e9e00" satisfied condition "Succeeded or Failed" May 27 00:17:37.179: INFO: Trying to get logs from node latest-worker2 pod downward-api-15fea7a0-8e2c-4cb2-ac13-54b5865e9e00 container dapi-container: STEP: delete the pod May 27 00:17:37.518: INFO: Waiting for pod downward-api-15fea7a0-8e2c-4cb2-ac13-54b5865e9e00 to disappear May 27 00:17:37.523: INFO: Pod downward-api-15fea7a0-8e2c-4cb2-ac13-54b5865e9e00 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:37.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5473" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":122,"skipped":2272,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:37.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:17:37.640: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:38.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8008" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":123,"skipped":2272,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:38.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 27 00:17:42.491: INFO: Pod pod-hostip-6da99578-3c69-4046-a1b6-d5b97e2c83a4 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:17:42.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4404" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":124,"skipped":2280,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:17:42.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:17:42.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 27 00:17:43.217: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-27T00:17:43Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-27T00:17:43Z]] name:name1 resourceVersion:7947851 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2902f0f5-59fb-47e6-8b15-9972fea1adbf] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 27 00:17:53.225: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-27T00:17:53Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-27T00:17:53Z]] name:name2 resourceVersion:7947895 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bfeb056b-fc33-4580-b54c-ff5fa41528d5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 27 00:18:03.232: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-27T00:17:43Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-27T00:18:03Z]] name:name1 resourceVersion:7947925 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2902f0f5-59fb-47e6-8b15-9972fea1adbf] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 27 00:18:13.241: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-27T00:17:53Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-27T00:18:13Z]] name:name2 resourceVersion:7947957 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bfeb056b-fc33-4580-b54c-ff5fa41528d5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 27 00:18:23.251: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-27T00:17:43Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-27T00:18:03Z]] name:name1 resourceVersion:7947987 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2902f0f5-59fb-47e6-8b15-9972fea1adbf] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 27 00:18:33.261: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-27T00:17:53Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-27T00:18:13Z]] name:name2 resourceVersion:7948017 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bfeb056b-fc33-4580-b54c-ff5fa41528d5] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:18:43.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1188" for this suite. 
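------------------------------
Note on the CRD watch spec above: custom resources have no typed client, so a watch like this is normally opened through client-go's dynamic client. A sketch against the log's mygroup.example.com/v1beta1 noxus resource, not the suite's own code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Group/version/resource as they appear in the log; the selfLinks show
	// noxus is cluster-scoped, so no Namespace() call is needed.
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus"}
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object) // ADDED / MODIFIED / DELETED
	}
}
------------------------------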
• [SLOW TEST:61.282 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":125,"skipped":2283,"failed":0} SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:18:43.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-ad4d205f-1774-4b44-ae21-f5ffb06e2000 STEP: Creating configMap with name cm-test-opt-upd-cc310199-b0a6-4acf-a24d-383ec13d4012 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ad4d205f-1774-4b44-ae21-f5ffb06e2000 STEP: Updating configmap cm-test-opt-upd-cc310199-b0a6-4acf-a24d-383ec13d4012 STEP: Creating configMap with name cm-test-opt-create-a6f99c73-fb59-424f-b5e1-b865e14aa2c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:20:03.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7673" for this suite. 
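------------------------------
Note on the "optional updates" spec above: the ConfigMap volumes are marked optional, which is what lets the pod keep running while one map is deleted and another is created underneath it, with the mounts eventually reflecting each change. A sketch of such a volume source, with illustrative names:

package example

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume tolerates the referenced ConfigMap being absent:
// the pod still starts, and the mounted files track the map as it is
// created, updated, or deleted.
func optionalConfigMapVolume(configMapName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
				Optional:             &optional,
			},
		},
	}
}
------------------------------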
• [SLOW TEST:79.230 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":126,"skipped":2285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:20:03.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:20:03.168: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-169be1c7-a10e-4267-a1ec-920599ea5729" in namespace "security-context-test-8713" to be "Succeeded or Failed" May 27 00:20:03.174: INFO: Pod "busybox-privileged-false-169be1c7-a10e-4267-a1ec-920599ea5729": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147114ms May 27 00:20:05.198: INFO: Pod "busybox-privileged-false-169be1c7-a10e-4267-a1ec-920599ea5729": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030352841s May 27 00:20:07.335: INFO: Pod "busybox-privileged-false-169be1c7-a10e-4267-a1ec-920599ea5729": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.167549372s May 27 00:20:07.336: INFO: Pod "busybox-privileged-false-169be1c7-a10e-4267-a1ec-920599ea5729" satisfied condition "Succeeded or Failed" May 27 00:20:07.356: INFO: Got logs for pod "busybox-privileged-false-169be1c7-a10e-4267-a1ec-920599ea5729": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:20:07.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8713" for this suite. 
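------------------------------
Note on the security-context spec above: with Privileged explicitly false, netlink operations inside the container are rejected by the kernel, which is exactly the "ip: RTNETLINK answers: Operation not permitted" pod log captured. A sketch of the container-level setting; the command is illustrative of the class of operation that gets denied:

package example

import corev1 "k8s.io/api/core/v1"

// unprivilegedContainer runs with Privileged explicitly set to false, so a
// netlink call such as adding a dummy interface fails with EPERM.
func unprivilegedContainer() corev1.Container {
	privileged := false
	return corev1.Container{
		Name:    "busybox-privileged-false",
		Image:   "busybox",
		Command: []string{"ip", "link", "add", "dummy0", "type", "dummy"},
		SecurityContext: &corev1.SecurityContext{
			Privileged: &privileged,
		},
	}
}
------------------------------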
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":2320,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:20:07.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 27 00:20:07.459: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 27 00:20:18.121: INFO: >>> kubeConfig: /root/.kube/config May 27 00:20:20.092: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:20:31.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7374" for this suite. 
• [SLOW TEST:24.336 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":128,"skipped":2327,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:20:31.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-1588aa49-364d-4eae-a1ed-231addf7c24f STEP: Creating a pod to test consume configMaps May 27 00:20:31.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-d220d094-bf0c-4a77-97dc-c4d639190684" in namespace "configmap-4124" to be "Succeeded or Failed" May 27 00:20:31.831: INFO: Pod "pod-configmaps-d220d094-bf0c-4a77-97dc-c4d639190684": Phase="Pending", Reason="", readiness=false. Elapsed: 19.83347ms May 27 00:20:33.835: INFO: Pod "pod-configmaps-d220d094-bf0c-4a77-97dc-c4d639190684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023610438s May 27 00:20:35.840: INFO: Pod "pod-configmaps-d220d094-bf0c-4a77-97dc-c4d639190684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02801729s STEP: Saw pod success May 27 00:20:35.840: INFO: Pod "pod-configmaps-d220d094-bf0c-4a77-97dc-c4d639190684" satisfied condition "Succeeded or Failed" May 27 00:20:35.843: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d220d094-bf0c-4a77-97dc-c4d639190684 container configmap-volume-test: STEP: delete the pod May 27 00:20:35.911: INFO: Waiting for pod pod-configmaps-d220d094-bf0c-4a77-97dc-c4d639190684 to disappear May 27 00:20:35.966: INFO: Pod pod-configmaps-d220d094-bf0c-4a77-97dc-c4d639190684 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:20:35.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4124" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2331,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:20:36.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 27 00:20:44.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 27 00:20:44.204: INFO: Pod pod-with-prestop-exec-hook still exists May 27 00:20:46.204: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 27 00:20:46.210: INFO: Pod pod-with-prestop-exec-hook still exists May 27 00:20:48.204: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 27 00:20:48.209: INFO: Pod pod-with-prestop-exec-hook still exists May 27 00:20:50.204: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 27 00:20:50.210: INFO: Pod pod-with-prestop-exec-hook still exists May 27 00:20:52.204: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 27 00:20:52.209: INFO: Pod pod-with-prestop-exec-hook still exists May 27 00:20:54.204: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 27 00:20:54.209: INFO: Pod pod-with-prestop-exec-hook still exists May 27 00:20:56.204: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 27 00:20:56.210: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:20:56.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2215" for this suite. 
• [SLOW TEST:20.158 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2339,"failed":0} [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:20:56.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 27 00:20:56.307: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:21:02.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4050" for this suite. 
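------------------------------
Note on the init-container spec above: with RestartPolicy Never, a failing init container is not retried, the app containers never start, and the pod goes straight to Failed, which is what the spec asserts. A minimal sketch of that pod shape:

package example

import corev1 "k8s.io/api/core/v1"

// failingInitPod has an init container that exits non-zero; under
// RestartPolicy Never the kubelet gives up immediately, so the app
// container never runs and the pod phase becomes Failed.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/false"}}, // exits 1
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"/bin/true"}}, // never started
			},
		},
	}
}
------------------------------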
• [SLOW TEST:6.854 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":131,"skipped":2339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:21:03.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 27 00:21:03.463: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1081 /api/v1/namespaces/watch-1081/configmaps/e2e-watch-test-label-changed ba4fafe5-ea60-4af7-b846-52fe4d76e267 7948642 0 2020-05-27 00:21:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:21:03.475: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1081 /api/v1/namespaces/watch-1081/configmaps/e2e-watch-test-label-changed ba4fafe5-ea60-4af7-b846-52fe4d76e267 7948644 0 2020-05-27 00:21:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:21:03.475: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1081 /api/v1/namespaces/watch-1081/configmaps/e2e-watch-test-label-changed ba4fafe5-ea60-4af7-b846-52fe4d76e267 7948645 0 2020-05-27 00:21:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value 
of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 27 00:21:13.528: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1081 /api/v1/namespaces/watch-1081/configmaps/e2e-watch-test-label-changed ba4fafe5-ea60-4af7-b846-52fe4d76e267 7948687 0 2020-05-27 00:21:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:21:13.528: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1081 /api/v1/namespaces/watch-1081/configmaps/e2e-watch-test-label-changed ba4fafe5-ea60-4af7-b846-52fe4d76e267 7948688 0 2020-05-27 00:21:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:21:13.528: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1081 /api/v1/namespaces/watch-1081/configmaps/e2e-watch-test-label-changed ba4fafe5-ea60-4af7-b846-52fe4d76e267 7948689 0 2020-05-27 00:21:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:21:13.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1081" for this suite. 
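------------------------------
Note on the watch spec above: a watch opened with a label selector reports an object relabeled out of the selector as DELETED, and reports it as ADDED again once the label is restored; that is the event sequence logged. A client-go sketch of opening such a watch, with the namespace and selector taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Only ConfigMaps carrying this label are visible to the watch; label
	// changes in and out of the selected value surface as ADDED and DELETED.
	w, err := client.CoreV1().ConfigMaps("watch-1081").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
------------------------------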
• [SLOW TEST:10.477 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":132,"skipped":2369,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:21:13.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:21:13.650: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:21:20.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7953" for this suite. 
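------------------------------
Note on the CRD spec above: CustomResourceDefinitions live behind the apiextensions API group rather than the core client. A hedged sketch of enumerating them with the apiextensions clientset; the conformance spec's exact flow, including any selectors and cleanup, is not shown in the log:

package main

import (
	"context"
	"fmt"

	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}
------------------------------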
• [SLOW TEST:7.109 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":133,"skipped":2369,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:21:20.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 27 00:21:20.760: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5350 /api/v1/namespaces/watch-5350/configmaps/e2e-watch-test-watch-closed 382c498f-5205-43f6-83af-0e25f673afc5 7948775 0 2020-05-27 00:21:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:21:20.760: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5350 /api/v1/namespaces/watch-5350/configmaps/e2e-watch-test-watch-closed 382c498f-5205-43f6-83af-0e25f673afc5 7948776 0 2020-05-27 00:21:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 27 00:21:20.819: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5350 /api/v1/namespaces/watch-5350/configmaps/e2e-watch-test-watch-closed 382c498f-5205-43f6-83af-0e25f673afc5 7948777 0 2020-05-27 00:21:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:20 +0000 
UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:21:20.819: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5350 /api/v1/namespaces/watch-5350/configmaps/e2e-watch-test-watch-closed 382c498f-5205-43f6-83af-0e25f673afc5 7948778 0 2020-05-27 00:21:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-27 00:21:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:21:20.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5350" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":134,"skipped":2374,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:21:20.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:21:20.879: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de87717b-b301-43d4-9f02-07418d8897f5" in namespace "projected-1355" to be "Succeeded or Failed" May 27 00:21:20.883: INFO: Pod "downwardapi-volume-de87717b-b301-43d4-9f02-07418d8897f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.745051ms May 27 00:21:22.887: INFO: Pod "downwardapi-volume-de87717b-b301-43d4-9f02-07418d8897f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008374672s May 27 00:21:24.891: INFO: Pod "downwardapi-volume-de87717b-b301-43d4-9f02-07418d8897f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0122266s STEP: Saw pod success May 27 00:21:24.891: INFO: Pod "downwardapi-volume-de87717b-b301-43d4-9f02-07418d8897f5" satisfied condition "Succeeded or Failed" May 27 00:21:24.894: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-de87717b-b301-43d4-9f02-07418d8897f5 container client-container: STEP: delete the pod May 27 00:21:25.048: INFO: Waiting for pod downwardapi-volume-de87717b-b301-43d4-9f02-07418d8897f5 to disappear May 27 00:21:25.062: INFO: Pod downwardapi-volume-de87717b-b301-43d4-9f02-07418d8897f5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:21:25.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1355" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2387,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:21:25.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-fe2f6b59-07e9-4a3b-ab1e-e2c103c6e156 in namespace container-probe-3655 May 27 00:21:29.199: INFO: Started pod busybox-fe2f6b59-07e9-4a3b-ab1e-e2c103c6e156 in namespace container-probe-3655 STEP: checking the pod's current state and verifying that restartCount is present May 27 00:21:29.202: INFO: Initial restart count of pod busybox-fe2f6b59-07e9-4a3b-ab1e-e2c103c6e156 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:25:30.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3655" for this suite. 
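------------------------------
Note on the probe spec above: the liveness probe execs "cat /tmp/health" in the container, and exit status 0 counts as healthy, so the restart count stays at 0 for as long as the file exists. A sketch of the probe object; the timing fields are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// execLivenessProbe runs `cat /tmp/health` inside the container on each
// probe period; removing the file would make the probe fail and bump the
// restart count.
func execLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		// corev1.Handler is the v1.18-era wrapper; later releases use
		// ProbeHandler here.
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       5,
	}
}
------------------------------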
• [SLOW TEST:245.086 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2402,"failed":0} [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:25:30.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 27 00:25:30.298: INFO: Waiting up to 5m0s for pod "pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee" in namespace "emptydir-9983" to be "Succeeded or Failed" May 27 00:25:30.302: INFO: Pod "pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456069ms May 27 00:25:32.473: INFO: Pod "pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175050701s May 27 00:25:34.477: INFO: Pod "pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee": Phase="Running", Reason="", readiness=true. Elapsed: 4.179175438s May 27 00:25:36.482: INFO: Pod "pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183984267s STEP: Saw pod success May 27 00:25:36.482: INFO: Pod "pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee" satisfied condition "Succeeded or Failed" May 27 00:25:36.485: INFO: Trying to get logs from node latest-worker pod pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee container test-container: STEP: delete the pod May 27 00:25:36.540: INFO: Waiting for pod pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee to disappear May 27 00:25:36.611: INFO: Pod pod-02fbe62b-187c-4fbe-8a6f-4e0f92338dee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:25:36.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9983" for this suite. 
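------------------------------
Note on the emptyDir spec above: "tmpfs" in the test name means the volume is backed by RAM via medium Memory rather than node disk; the (root,0777,tmpfs) case then writes a mode-0777 file into it as root and reads the mode back. A sketch of the volume source:

package example

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDir backs the emptyDir with RAM instead of node disk; contents
// are lost when the pod leaves the node, same as any emptyDir.
func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory, // tmpfs
			},
		},
	}
}
------------------------------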
• [SLOW TEST:6.483 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:25:36.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 27 00:25:40.840: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:25:40.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-587" for this suite. 
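------------------------------
Note on the termination-message spec above: the container exits after writing "DONE" to a non-default terminationMessagePath as a non-root user, and the kubelet copies that text into the container status, which is what the "Expected: &{DONE} to match" line verifies. A sketch of the container shape; the path and UID are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// terminationMessageContainer writes its termination message to a custom
// path as a non-root user; the kubelet reads the file on exit and surfaces
// it in the container's terminated state.
func terminationMessageContainer() corev1.Container {
	uid := int64(1000)
	return corev1.Container{
		Name:                   "termination-message-container",
		Image:                  "busybox",
		Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
}
------------------------------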
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2434,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:25:40.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-kvmb STEP: Creating a pod to test atomic-volume-subpath May 27 00:25:41.060: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-kvmb" in namespace "subpath-1910" to be "Succeeded or Failed" May 27 00:25:41.082: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.31836ms May 27 00:25:43.100: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039829679s May 27 00:25:45.117: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 4.057491874s May 27 00:25:47.123: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 6.063700108s May 27 00:25:49.155: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 8.095514625s May 27 00:25:51.160: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 10.1001804s May 27 00:25:53.164: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 12.104647766s May 27 00:25:55.169: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 14.108788881s May 27 00:25:57.187: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 16.127553585s May 27 00:25:59.193: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 18.133548971s May 27 00:26:01.197: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 20.137717475s May 27 00:26:03.202: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Running", Reason="", readiness=true. Elapsed: 22.142250988s May 27 00:26:05.208: INFO: Pod "pod-subpath-test-downwardapi-kvmb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.148517204s STEP: Saw pod success May 27 00:26:05.208: INFO: Pod "pod-subpath-test-downwardapi-kvmb" satisfied condition "Succeeded or Failed" May 27 00:26:05.218: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-kvmb container test-container-subpath-downwardapi-kvmb: STEP: delete the pod May 27 00:26:05.285: INFO: Waiting for pod pod-subpath-test-downwardapi-kvmb to disappear May 27 00:26:05.383: INFO: Pod pod-subpath-test-downwardapi-kvmb no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-kvmb May 27 00:26:05.383: INFO: Deleting pod "pod-subpath-test-downwardapi-kvmb" in namespace "subpath-1910" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:26:05.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1910" for this suite. • [SLOW TEST:24.483 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":139,"skipped":2456,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:26:05.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-83528d1d-b743-43fb-a92f-21924e97f3af STEP: Creating a pod to test consume secrets May 27 00:26:05.522: INFO: Waiting up to 5m0s for pod "pod-secrets-304193df-4457-4c3a-9e2a-78861bbe4909" in namespace "secrets-9413" to be "Succeeded or Failed" May 27 00:26:05.529: INFO: Pod "pod-secrets-304193df-4457-4c3a-9e2a-78861bbe4909": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459614ms May 27 00:26:07.722: INFO: Pod "pod-secrets-304193df-4457-4c3a-9e2a-78861bbe4909": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200015783s May 27 00:26:09.725: INFO: Pod "pod-secrets-304193df-4457-4c3a-9e2a-78861bbe4909": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.202591569s STEP: Saw pod success May 27 00:26:09.725: INFO: Pod "pod-secrets-304193df-4457-4c3a-9e2a-78861bbe4909" satisfied condition "Succeeded or Failed" May 27 00:26:09.727: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-304193df-4457-4c3a-9e2a-78861bbe4909 container secret-volume-test: STEP: delete the pod May 27 00:26:09.790: INFO: Waiting for pod pod-secrets-304193df-4457-4c3a-9e2a-78861bbe4909 to disappear May 27 00:26:09.804: INFO: Pod pod-secrets-304193df-4457-4c3a-9e2a-78861bbe4909 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:26:09.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9413" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":140,"skipped":2458,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:26:09.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 27 00:26:14.493: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1302 pod-service-account-d2a2870b-653d-45ef-bc59-1f16cba7169e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 27 00:26:17.670: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1302 pod-service-account-d2a2870b-653d-45ef-bc59-1f16cba7169e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 27 00:26:17.870: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1302 pod-service-account-d2a2870b-653d-45ef-bc59-1f16cba7169e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:26:18.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1302" for this suite. 
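The three exec calls above are the whole ServiceAccounts check: any pod that uses a service account gets the token, CA bundle, and namespace projected at a fixed path. The same inspection works by hand against any running pod (the pod name below is a placeholder):

kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace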
• [SLOW TEST:8.289 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":141,"skipped":2462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:26:18.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 27 00:26:18.179: INFO: Waiting up to 5m0s for pod "pod-301e1463-b9c9-46e6-9ff4-ad17737f647e" in namespace "emptydir-5575" to be "Succeeded or Failed" May 27 00:26:18.196: INFO: Pod "pod-301e1463-b9c9-46e6-9ff4-ad17737f647e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.421382ms May 27 00:26:20.252: INFO: Pod "pod-301e1463-b9c9-46e6-9ff4-ad17737f647e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072794289s May 27 00:26:22.256: INFO: Pod "pod-301e1463-b9c9-46e6-9ff4-ad17737f647e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076760644s STEP: Saw pod success May 27 00:26:22.256: INFO: Pod "pod-301e1463-b9c9-46e6-9ff4-ad17737f647e" satisfied condition "Succeeded or Failed" May 27 00:26:22.259: INFO: Trying to get logs from node latest-worker pod pod-301e1463-b9c9-46e6-9ff4-ad17737f647e container test-container: STEP: delete the pod May 27 00:26:22.434: INFO: Waiting for pod pod-301e1463-b9c9-46e6-9ff4-ad17737f647e to disappear May 27 00:26:22.599: INFO: Pod pod-301e1463-b9c9-46e6-9ff4-ad17737f647e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:26:22.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5575" for this suite. 
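The (root,0644,tmpfs) case above boils down to an emptyDir volume with medium: Memory plus a file created with mode 0644. A minimal sketch, assuming a generic busybox image in place of the suite's mounttest container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo data > /ephemeral/f && chmod 0644 /ephemeral/f && ls -l /ephemeral && mount | grep /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed, the "tmpfs" in the test name
EOF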
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2518,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:26:22.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:26:23.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:26:25.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135983, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135983, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135984, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135983, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:26:27.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135983, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135983, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135984, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726135983, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:26:30.551: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:26:40.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5447" for this suite. STEP: Destroying namespace "webhook-5447-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.267 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":143,"skipped":2520,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:26:40.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2287 STEP: creating a selector STEP: Creating the service pods in kubernetes May 27 00:26:41.010: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 27 00:26:41.102: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 00:26:43.107: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 00:26:45.106: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:26:47.107: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:26:49.107: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:26:51.107: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:26:53.107: 
INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:26:55.107: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:26:57.106: INFO: The status of Pod netserver-0 is Running (Ready = true) May 27 00:26:57.113: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 27 00:27:03.168: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.145 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2287 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:27:03.168: INFO: >>> kubeConfig: /root/.kube/config I0527 00:27:03.207574 8 log.go:172] (0xc002aaa420) (0xc002b1bae0) Create stream I0527 00:27:03.207626 8 log.go:172] (0xc002aaa420) (0xc002b1bae0) Stream added, broadcasting: 1 I0527 00:27:03.210607 8 log.go:172] (0xc002aaa420) Reply frame received for 1 I0527 00:27:03.210654 8 log.go:172] (0xc002aaa420) (0xc002b1bb80) Create stream I0527 00:27:03.210675 8 log.go:172] (0xc002aaa420) (0xc002b1bb80) Stream added, broadcasting: 3 I0527 00:27:03.211970 8 log.go:172] (0xc002aaa420) Reply frame received for 3 I0527 00:27:03.212024 8 log.go:172] (0xc002aaa420) (0xc001f4a780) Create stream I0527 00:27:03.212063 8 log.go:172] (0xc002aaa420) (0xc001f4a780) Stream added, broadcasting: 5 I0527 00:27:03.213082 8 log.go:172] (0xc002aaa420) Reply frame received for 5 I0527 00:27:04.333525 8 log.go:172] (0xc002aaa420) Data frame received for 5 I0527 00:27:04.333580 8 log.go:172] (0xc001f4a780) (5) Data frame handling I0527 00:27:04.333666 8 log.go:172] (0xc002aaa420) Data frame received for 3 I0527 00:27:04.333711 8 log.go:172] (0xc002b1bb80) (3) Data frame handling I0527 00:27:04.333743 8 log.go:172] (0xc002b1bb80) (3) Data frame sent I0527 00:27:04.333953 8 log.go:172] (0xc002aaa420) Data frame received for 3 I0527 00:27:04.333987 8 log.go:172] (0xc002b1bb80) (3) Data frame handling I0527 00:27:04.336393 8 log.go:172] (0xc002aaa420) Data frame received for 1 I0527 00:27:04.336424 8 log.go:172] (0xc002b1bae0) (1) Data frame handling I0527 00:27:04.336442 8 log.go:172] (0xc002b1bae0) (1) Data frame sent I0527 00:27:04.336530 8 log.go:172] (0xc002aaa420) (0xc002b1bae0) Stream removed, broadcasting: 1 I0527 00:27:04.336575 8 log.go:172] (0xc002aaa420) Go away received I0527 00:27:04.336689 8 log.go:172] (0xc002aaa420) (0xc002b1bae0) Stream removed, broadcasting: 1 I0527 00:27:04.336711 8 log.go:172] (0xc002aaa420) (0xc002b1bb80) Stream removed, broadcasting: 3 I0527 00:27:04.336725 8 log.go:172] (0xc002aaa420) (0xc001f4a780) Stream removed, broadcasting: 5 May 27 00:27:04.336: INFO: Found all expected endpoints: [netserver-0] May 27 00:27:04.340: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.158 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2287 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:27:04.340: INFO: >>> kubeConfig: /root/.kube/config I0527 00:27:04.375115 8 log.go:172] (0xc002aaaa50) (0xc001790000) Create stream I0527 00:27:04.375147 8 log.go:172] (0xc002aaaa50) (0xc001790000) Stream added, broadcasting: 1 I0527 00:27:04.378030 8 log.go:172] (0xc002aaaa50) Reply frame received for 1 I0527 00:27:04.378100 8 log.go:172] (0xc002aaaa50) (0xc001ec8fa0) Create stream I0527 00:27:04.378133 8 log.go:172] (0xc002aaaa50) (0xc001ec8fa0) Stream added, broadcasting: 3 I0527 00:27:04.379155 8 log.go:172] 
(0xc002aaaa50) Reply frame received for 3 I0527 00:27:04.379206 8 log.go:172] (0xc002aaaa50) (0xc001ec9040) Create stream I0527 00:27:04.379227 8 log.go:172] (0xc002aaaa50) (0xc001ec9040) Stream added, broadcasting: 5 I0527 00:27:04.380082 8 log.go:172] (0xc002aaaa50) Reply frame received for 5 I0527 00:27:05.465830 8 log.go:172] (0xc002aaaa50) Data frame received for 5 I0527 00:27:05.465879 8 log.go:172] (0xc001ec9040) (5) Data frame handling I0527 00:27:05.465913 8 log.go:172] (0xc002aaaa50) Data frame received for 3 I0527 00:27:05.465936 8 log.go:172] (0xc001ec8fa0) (3) Data frame handling I0527 00:27:05.465960 8 log.go:172] (0xc001ec8fa0) (3) Data frame sent I0527 00:27:05.466189 8 log.go:172] (0xc002aaaa50) Data frame received for 3 I0527 00:27:05.466203 8 log.go:172] (0xc001ec8fa0) (3) Data frame handling I0527 00:27:05.467922 8 log.go:172] (0xc002aaaa50) Data frame received for 1 I0527 00:27:05.467946 8 log.go:172] (0xc001790000) (1) Data frame handling I0527 00:27:05.467961 8 log.go:172] (0xc001790000) (1) Data frame sent I0527 00:27:05.467978 8 log.go:172] (0xc002aaaa50) (0xc001790000) Stream removed, broadcasting: 1 I0527 00:27:05.467994 8 log.go:172] (0xc002aaaa50) Go away received I0527 00:27:05.468199 8 log.go:172] (0xc002aaaa50) (0xc001790000) Stream removed, broadcasting: 1 I0527 00:27:05.468234 8 log.go:172] (0xc002aaaa50) (0xc001ec8fa0) Stream removed, broadcasting: 3 I0527 00:27:05.468259 8 log.go:172] (0xc002aaaa50) (0xc001ec9040) Stream removed, broadcasting: 5 May 27 00:27:05.468: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:27:05.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2287" for this suite. 
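Each "Found all expected endpoints" above follows a UDP probe run from the host-network test pod: the netserver pods listen on UDP 8081 and echo their hostname back. Stripped of the streaming plumbing, the probe is the one-liner quoted in the log (pod IP and namespace are specific to this run):

kubectl exec host-test-container-pod -n pod-network-test-2287 -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.145 8081 | grep -v '^\s*$'"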
• [SLOW TEST:24.592 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":144,"skipped":2538,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:27:05.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:27:05.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9348" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":145,"skipped":2546,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:27:05.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:27:05.698: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 27 00:27:05.740: INFO: Number of nodes with available pods: 0 May 27 00:27:05.740: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 27 00:27:05.794: INFO: Number of nodes with available pods: 0 May 27 00:27:05.794: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:06.799: INFO: Number of nodes with available pods: 0 May 27 00:27:06.799: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:07.798: INFO: Number of nodes with available pods: 0 May 27 00:27:07.798: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:08.842: INFO: Number of nodes with available pods: 0 May 27 00:27:08.842: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:09.799: INFO: Number of nodes with available pods: 1 May 27 00:27:09.799: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 27 00:27:09.844: INFO: Number of nodes with available pods: 1 May 27 00:27:09.844: INFO: Number of running nodes: 0, number of available pods: 1 May 27 00:27:10.848: INFO: Number of nodes with available pods: 0 May 27 00:27:10.848: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 27 00:27:10.890: INFO: Number of nodes with available pods: 0 May 27 00:27:10.890: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:11.905: INFO: Number of nodes with available pods: 0 May 27 00:27:11.905: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:12.953: INFO: Number of nodes with available pods: 0 May 27 00:27:12.953: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:13.894: INFO: Number of nodes with available pods: 0 May 27 00:27:13.894: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:14.894: INFO: Number of nodes with available pods: 0 May 27 00:27:14.894: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:15.894: INFO: Number of nodes with available pods: 0 May 27 00:27:15.895: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:16.895: INFO: Number of nodes with available pods: 0 May 27 00:27:16.895: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:17.894: INFO: Number of nodes with available pods: 0 May 27 00:27:17.894: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:18.911: INFO: Number of nodes with available pods: 0 May 27 00:27:18.911: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:19.895: INFO: Number of nodes with available pods: 0 May 27 00:27:19.895: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:20.895: INFO: Number of nodes with available pods: 0 May 27 00:27:20.895: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:21.895: INFO: Number of nodes with available pods: 0 May 27 00:27:21.895: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:22.895: INFO: Number of nodes with available pods: 0 May 27 00:27:22.895: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:23.894: INFO: Number of nodes with available pods: 0 May 27 00:27:23.894: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:24.895: INFO: Number of nodes with available pods: 0 May 27 00:27:24.895: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:25.895: INFO: Number of nodes with available pods: 0 May 27 00:27:25.895: INFO: Node latest-worker is running 
more than one daemon pod May 27 00:27:26.967: INFO: Number of nodes with available pods: 0 May 27 00:27:26.967: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:27.894: INFO: Number of nodes with available pods: 0 May 27 00:27:27.894: INFO: Node latest-worker is running more than one daemon pod May 27 00:27:28.895: INFO: Number of nodes with available pods: 1 May 27 00:27:28.895: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6735, will wait for the garbage collector to delete the pods May 27 00:27:28.961: INFO: Deleting DaemonSet.extensions daemon-set took: 7.011987ms May 27 00:27:29.261: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.224146ms May 27 00:27:33.164: INFO: Number of nodes with available pods: 0 May 27 00:27:33.164: INFO: Number of running nodes: 0, number of available pods: 0 May 27 00:27:33.167: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6735/daemonsets","resourceVersion":"7950267"},"items":null} May 27 00:27:33.170: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6735/pods","resourceVersion":"7950267"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:27:33.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6735" for this suite. • [SLOW TEST:27.593 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":146,"skipped":2553,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:27:33.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 27 00:27:33.290: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 27 00:27:33.340: INFO: Waiting for terminating namespaces to be deleted... 
May 27 00:27:33.343: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 27 00:27:33.347: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 27 00:27:33.347: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 27 00:27:33.347: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 27 00:27:33.347: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 27 00:27:33.347: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 27 00:27:33.347: INFO: Container kindnet-cni ready: true, restart count 2 May 27 00:27:33.347: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 27 00:27:33.347: INFO: Container kube-proxy ready: true, restart count 0 May 27 00:27:33.347: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 27 00:27:33.353: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 27 00:27:33.353: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 27 00:27:33.353: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 27 00:27:33.353: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 27 00:27:33.353: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 27 00:27:33.353: INFO: Container kindnet-cni ready: true, restart count 2 May 27 00:27:33.353: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 27 00:27:33.353: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 27 00:27:33.481: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 27 00:27:33.481: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 May 27 00:27:33.481: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 27 00:27:33.481: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 27 00:27:33.481: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 27 00:27:33.481: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 27 00:27:33.481: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 27 00:27:33.487: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-3db7a5b1-af99-4bd4-b567-d710325d2522.1612bb062935d5f9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2535/filler-pod-3db7a5b1-af99-4bd4-b567-d710325d2522 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3db7a5b1-af99-4bd4-b567-d710325d2522.1612bb06a30641e8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3db7a5b1-af99-4bd4-b567-d710325d2522.1612bb06e7ecbaf6], Reason = [Created], Message = [Created container filler-pod-3db7a5b1-af99-4bd4-b567-d710325d2522] STEP: Considering event: Type = [Normal], Name = [filler-pod-3db7a5b1-af99-4bd4-b567-d710325d2522.1612bb06fa686622], Reason = [Started], Message = [Started container filler-pod-3db7a5b1-af99-4bd4-b567-d710325d2522] STEP: Considering event: Type = [Normal], Name = [filler-pod-7ca76cf5-5a81-4b81-988e-d17f86e5122f.1612bb06275f49fa], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2535/filler-pod-7ca76cf5-5a81-4b81-988e-d17f86e5122f to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-7ca76cf5-5a81-4b81-988e-d17f86e5122f.1612bb0673a8f415], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7ca76cf5-5a81-4b81-988e-d17f86e5122f.1612bb06cba11bc9], Reason = [Created], Message = [Created container filler-pod-7ca76cf5-5a81-4b81-988e-d17f86e5122f] STEP: Considering event: Type = [Normal], Name = [filler-pod-7ca76cf5-5a81-4b81-988e-d17f86e5122f.1612bb06e28c2358], Reason = [Started], Message = [Started container filler-pod-7ca76cf5-5a81-4b81-988e-d17f86e5122f] STEP: Considering event: Type = [Warning], Name = [additional-pod.1612bb079129fabf], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1612bb0794a48403], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:27:40.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2535" for this suite. 
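The predicate test above saturates each node's allocatable CPU with filler pods sized from the per-node sums logged earlier, then shows that one more pod requesting CPU cannot schedule. The same failure mode is reproducible with any over-sized request (the name and figure below are illustrative, not taken from the suite):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo        # hypothetical name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "600m"                # pick a value above the node's remaining allocatable CPU
EOF
# The pod stays Pending; the scheduler records the reason as a
# FailedScheduling event, as in the "Insufficient cpu" events above:
kubectl describe pod additional-pod-demo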
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.476 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":147,"skipped":2567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:27:40.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5869 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5869 STEP: creating replication controller externalsvc in namespace services-5869 I0527 00:27:40.959559 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5869, replica count: 2 I0527 00:27:44.009980 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:27:47.010223 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:27:50.010488 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 27 00:27:50.059: INFO: Creating new exec pod May 27 00:27:56.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5869 execpodgghtj -- /bin/sh -x -c nslookup clusterip-service' May 27 00:27:56.441: INFO: stderr: "I0527 00:27:56.235363 1661 log.go:172] (0xc000b0a000) (0xc0000ff900) Create stream\nI0527 00:27:56.235451 1661 log.go:172] (0xc000b0a000) (0xc0000ff900) Stream added, broadcasting: 1\nI0527 00:27:56.237523 1661 log.go:172] (0xc000b0a000) Reply frame received for 1\nI0527 00:27:56.237561 1661 log.go:172] (0xc000b0a000) (0xc0001694a0) Create stream\nI0527 00:27:56.237573 1661 log.go:172] (0xc000b0a000) (0xc0001694a0) Stream added, broadcasting: 3\nI0527 00:27:56.238668 1661 log.go:172] (0xc000b0a000) Reply frame received for 
3\nI0527 00:27:56.238709 1661 log.go:172] (0xc000b0a000) (0xc0009f2000) Create stream\nI0527 00:27:56.238725 1661 log.go:172] (0xc000b0a000) (0xc0009f2000) Stream added, broadcasting: 5\nI0527 00:27:56.239575 1661 log.go:172] (0xc000b0a000) Reply frame received for 5\nI0527 00:27:56.334531 1661 log.go:172] (0xc000b0a000) Data frame received for 5\nI0527 00:27:56.334568 1661 log.go:172] (0xc0009f2000) (5) Data frame handling\nI0527 00:27:56.334598 1661 log.go:172] (0xc0009f2000) (5) Data frame sent\n+ nslookup clusterip-service\nI0527 00:27:56.428537 1661 log.go:172] (0xc000b0a000) Data frame received for 3\nI0527 00:27:56.428576 1661 log.go:172] (0xc0001694a0) (3) Data frame handling\nI0527 00:27:56.428599 1661 log.go:172] (0xc0001694a0) (3) Data frame sent\nI0527 00:27:56.429859 1661 log.go:172] (0xc000b0a000) Data frame received for 3\nI0527 00:27:56.429885 1661 log.go:172] (0xc0001694a0) (3) Data frame handling\nI0527 00:27:56.429904 1661 log.go:172] (0xc0001694a0) (3) Data frame sent\nI0527 00:27:56.430550 1661 log.go:172] (0xc000b0a000) Data frame received for 5\nI0527 00:27:56.430738 1661 log.go:172] (0xc000b0a000) Data frame received for 3\nI0527 00:27:56.430784 1661 log.go:172] (0xc0001694a0) (3) Data frame handling\nI0527 00:27:56.430820 1661 log.go:172] (0xc0009f2000) (5) Data frame handling\nI0527 00:27:56.433053 1661 log.go:172] (0xc000b0a000) Data frame received for 1\nI0527 00:27:56.433347 1661 log.go:172] (0xc0000ff900) (1) Data frame handling\nI0527 00:27:56.433497 1661 log.go:172] (0xc0000ff900) (1) Data frame sent\nI0527 00:27:56.433534 1661 log.go:172] (0xc000b0a000) (0xc0000ff900) Stream removed, broadcasting: 1\nI0527 00:27:56.433564 1661 log.go:172] (0xc000b0a000) Go away received\nI0527 00:27:56.433966 1661 log.go:172] (0xc000b0a000) (0xc0000ff900) Stream removed, broadcasting: 1\nI0527 00:27:56.433986 1661 log.go:172] (0xc000b0a000) (0xc0001694a0) Stream removed, broadcasting: 3\nI0527 00:27:56.433997 1661 log.go:172] (0xc000b0a000) (0xc0009f2000) Stream removed, broadcasting: 5\n" May 27 00:27:56.441: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5869.svc.cluster.local\tcanonical name = externalsvc.services-5869.svc.cluster.local.\nName:\texternalsvc.services-5869.svc.cluster.local\nAddress: 10.103.5.83\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5869, will wait for the garbage collector to delete the pods May 27 00:27:56.555: INFO: Deleting ReplicationController externalsvc took: 57.439321ms May 27 00:27:56.655: INFO: Terminating ReplicationController externalsvc pods took: 100.217784ms May 27 00:28:05.380: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:05.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5869" for this suite. 
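The type change above leaves DNS doing the forwarding: once clusterip-service becomes type=ExternalName, lookups return a CNAME to the target name instead of a ClusterIP, which is exactly what the nslookup output shows. One way to make the same change by hand (the suite edits the Service through the API; a JSON merge patch with clusterIP set to null drops the allocated IP):

kubectl patch service clusterip-service -n services-5869 --type merge \
  -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-5869.svc.cluster.local","clusterIP":null}}'
# Verified the same way the test does, from a pod inside the cluster:
kubectl exec execpodgghtj -n services-5869 -- nslookup clusterip-service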
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:24.715 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":148,"skipped":2611,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:05.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:28:05.495: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 27 00:28:08.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2721 create -f -' May 27 00:28:14.449: INFO: stderr: "" May 27 00:28:14.449: INFO: stdout: "e2e-test-crd-publish-openapi-5793-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 27 00:28:14.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2721 delete e2e-test-crd-publish-openapi-5793-crds test-cr' May 27 00:28:14.558: INFO: stderr: "" May 27 00:28:14.558: INFO: stdout: "e2e-test-crd-publish-openapi-5793-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 27 00:28:14.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2721 apply -f -' May 27 00:28:17.027: INFO: stderr: "" May 27 00:28:17.027: INFO: stdout: "e2e-test-crd-publish-openapi-5793-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 27 00:28:17.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2721 delete e2e-test-crd-publish-openapi-5793-crds test-cr' May 27 00:28:17.130: INFO: stderr: "" May 27 00:28:17.130: INFO: stdout: "e2e-test-crd-publish-openapi-5793-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 27 00:28:17.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5793-crds' May 27 00:28:19.584: INFO: stderr: "" May 27 00:28:19.584: INFO: stdout: 
"KIND: E2e-test-crd-publish-openapi-5793-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:22.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2721" for this suite. • [SLOW TEST:17.086 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":149,"skipped":2626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:22.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-7f8308fa-9dd1-4cc7-a9ad-c7520b6d310e STEP: Creating a pod to test consume secrets May 27 00:28:22.639: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771" in namespace "projected-6301" to be "Succeeded or Failed" May 27 00:28:22.666: INFO: Pod "pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771": Phase="Pending", Reason="", readiness=false. Elapsed: 27.42489ms May 27 00:28:24.728: INFO: Pod "pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.089420072s May 27 00:28:26.732: INFO: Pod "pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771": Phase="Running", Reason="", readiness=true. Elapsed: 4.093558344s May 27 00:28:28.737: INFO: Pod "pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098633301s STEP: Saw pod success May 27 00:28:28.738: INFO: Pod "pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771" satisfied condition "Succeeded or Failed" May 27 00:28:28.741: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771 container projected-secret-volume-test: STEP: delete the pod May 27 00:28:28.847: INFO: Waiting for pod pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771 to disappear May 27 00:28:28.861: INFO: Pod pod-projected-secrets-6c613175-cde7-4215-9d5f-dbb6987f6771 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:28.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6301" for this suite. • [SLOW TEST:6.366 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":150,"skipped":2665,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:28.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-3c9d90ee-7a91-4231-8abb-7490f8037a73 STEP: Creating a pod to test consume secrets May 27 00:28:29.063: INFO: Waiting up to 5m0s for pod "pod-secrets-f1fa1008-67f4-4292-ab07-aa3ba0b63c08" in namespace "secrets-3242" to be "Succeeded or Failed" May 27 00:28:29.145: INFO: Pod "pod-secrets-f1fa1008-67f4-4292-ab07-aa3ba0b63c08": Phase="Pending", Reason="", readiness=false. Elapsed: 81.837397ms May 27 00:28:31.235: INFO: Pod "pod-secrets-f1fa1008-67f4-4292-ab07-aa3ba0b63c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171315463s May 27 00:28:33.239: INFO: Pod "pod-secrets-f1fa1008-67f4-4292-ab07-aa3ba0b63c08": Phase="Succeeded", Reason="", readiness=false. 
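The "mappings and Item Mode" variant above exercises the items list of a projected secret source: each key is remapped to a new path and given a per-file mode. A minimal sketch, assuming a pre-existing Secret named mysecret with a key data-1 (the suite generates randomized names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /projected && cat /projected/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /projected
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: mysecret           # assumed to exist
          items:
          - key: data-1            # assumed key
            path: new-path-data-1  # remapped path
            mode: 0400             # per-item mode under test
EOF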
Elapsed: 4.175912866s STEP: Saw pod success May 27 00:28:33.239: INFO: Pod "pod-secrets-f1fa1008-67f4-4292-ab07-aa3ba0b63c08" satisfied condition "Succeeded or Failed" May 27 00:28:33.243: INFO: Trying to get logs from node latest-worker pod pod-secrets-f1fa1008-67f4-4292-ab07-aa3ba0b63c08 container secret-volume-test: STEP: delete the pod May 27 00:28:33.267: INFO: Waiting for pod pod-secrets-f1fa1008-67f4-4292-ab07-aa3ba0b63c08 to disappear May 27 00:28:33.271: INFO: Pod pod-secrets-f1fa1008-67f4-4292-ab07-aa3ba0b63c08 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:33.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3242" for this suite. STEP: Destroying namespace "secret-namespace-3660" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":151,"skipped":2682,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:33.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:39.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4009" for this suite. STEP: Destroying namespace "nsdeletetest-2856" for this suite. May 27 00:28:39.605: INFO: Namespace nsdeletetest-2856 was already deleted STEP: Destroying namespace "nsdeletetest-4944" for this suite. 
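The Namespaces check above is the cascade-deletion guarantee: deleting a namespace removes the services inside it, and a recreated namespace of the same name starts empty. The equivalent by hand (namespace and service names are illustrative):

kubectl create namespace demo-ns
kubectl create service clusterip demo-svc --tcp=80:80 -n demo-ns
kubectl delete namespace demo-ns          # blocks until the namespace is fully removed
kubectl create namespace demo-ns
kubectl get services -n demo-ns           # "No resources found" - the service did not survive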
• [SLOW TEST:6.309 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":152,"skipped":2692,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:39.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:28:39.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 27 00:28:39.834: INFO: stderr: "" May 27 00:28:39.834: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1498" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":153,"skipped":2745,"failed":0} ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:39.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-a90a0f33-458d-4fd3-9e3f-a2c36129d1d0 STEP: Creating secret with name secret-projected-all-test-volume-6d8b63b1-203d-4fb5-9dc3-caed256f1347 STEP: Creating a pod to test Check all projections for projected volume plugin May 27 00:28:39.990: INFO: Waiting up to 5m0s for pod "projected-volume-67aee529-4b9c-4133-afc8-80405be3443c" in namespace "projected-6969" to be "Succeeded or Failed" May 27 00:28:39.994: INFO: Pod "projected-volume-67aee529-4b9c-4133-afc8-80405be3443c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.404532ms May 27 00:28:42.289: INFO: Pod "projected-volume-67aee529-4b9c-4133-afc8-80405be3443c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298857578s May 27 00:28:44.292: INFO: Pod "projected-volume-67aee529-4b9c-4133-afc8-80405be3443c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.301968284s STEP: Saw pod success May 27 00:28:44.292: INFO: Pod "projected-volume-67aee529-4b9c-4133-afc8-80405be3443c" satisfied condition "Succeeded or Failed" May 27 00:28:44.295: INFO: Trying to get logs from node latest-worker pod projected-volume-67aee529-4b9c-4133-afc8-80405be3443c container projected-all-volume-test: STEP: delete the pod May 27 00:28:44.384: INFO: Waiting for pod projected-volume-67aee529-4b9c-4133-afc8-80405be3443c to disappear May 27 00:28:44.389: INFO: Pod projected-volume-67aee529-4b9c-4133-afc8-80405be3443c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:44.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6969" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":154,"skipped":2745,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:44.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:28:44.448: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939" in namespace "downward-api-6020" to be "Succeeded or Failed" May 27 00:28:44.463: INFO: Pod "downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939": Phase="Pending", Reason="", readiness=false. Elapsed: 15.646609ms May 27 00:28:46.535: INFO: Pod "downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087029462s May 27 00:28:48.539: INFO: Pod "downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091695972s May 27 00:28:50.543: INFO: Pod "downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095601556s STEP: Saw pod success May 27 00:28:50.543: INFO: Pod "downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939" satisfied condition "Succeeded or Failed" May 27 00:28:50.546: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939 container client-container: STEP: delete the pod May 27 00:28:50.587: INFO: Waiting for pod downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939 to disappear May 27 00:28:50.630: INFO: Pod downwardapi-volume-c145be4a-1295-4d12-88b9-9d15d869b939 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:50.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6020" for this suite. 
• [SLOW TEST:6.242 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":155,"skipped":2760,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:50.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:50.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3920" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":156,"skipped":2768,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:50.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:28:54.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6270" for this suite. 
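
The /etc/hosts entries verified by the kubelet test above come from the pod's hostAliases field. A minimal sketch with hypothetical hostnames:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "foo.local"
        - "bar.local"
      containers:
      - name: busybox-host-aliases
        image: busybox
        command: ["cat", "/etc/hosts"]
    EOF
    # kubectl logs hostaliases-demo should show a line: 127.0.0.1  foo.local  bar.local
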
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":157,"skipped":2772,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:28:54.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 27 00:29:01.139: INFO: DNS probes using dns-4990/dns-test-107845ed-1b54-412a-8b80-e389d6491633 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:29:01.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4990" for this suite. 
• [SLOW TEST:6.335 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":158,"skipped":2794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:29:01.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-a8ded803-d47e-4219-9059-2d2691c9e765 STEP: Creating a pod to test consume configMaps May 27 00:29:01.636: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e" in namespace "projected-5035" to be "Succeeded or Failed" May 27 00:29:01.690: INFO: Pod "pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e": Phase="Pending", Reason="", readiness=false. Elapsed: 54.794493ms May 27 00:29:03.695: INFO: Pod "pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059383605s May 27 00:29:05.699: INFO: Pod "pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063372263s May 27 00:29:07.704: INFO: Pod "pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068163268s STEP: Saw pod success May 27 00:29:07.704: INFO: Pod "pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e" satisfied condition "Succeeded or Failed" May 27 00:29:07.706: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e container projected-configmap-volume-test: STEP: delete the pod May 27 00:29:07.764: INFO: Waiting for pod pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e to disappear May 27 00:29:07.774: INFO: Pod pod-projected-configmaps-8b124db8-551b-480b-b4ca-f8b74a69733e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:29:07.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5035" for this suite. 
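
The projection above remaps a ConfigMap key to a new relative path and reads it back as a non-root user. A sketch with hypothetical names and UID:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000            # run the consumer as non-root
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected/path/to/data"]
        volumeMounts:
        - name: cm
          mountPath: /etc/projected
      volumes:
      - name: cm
        projected:
          sources:
          - configMap:
              name: demo-configmap   # assumed to exist with a key named data-1
              items:
              - key: data-1
                path: path/to/data   # key is exposed under the mapped path
    EOF
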
• [SLOW TEST:6.496 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":159,"skipped":2862,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:29:07.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 27 00:29:07.836: INFO: Waiting up to 5m0s for pod "pod-2b886e01-4e4e-423c-8662-b9d4bb906a50" in namespace "emptydir-6746" to be "Succeeded or Failed" May 27 00:29:07.906: INFO: Pod "pod-2b886e01-4e4e-423c-8662-b9d4bb906a50": Phase="Pending", Reason="", readiness=false. Elapsed: 69.870954ms May 27 00:29:09.911: INFO: Pod "pod-2b886e01-4e4e-423c-8662-b9d4bb906a50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074451453s May 27 00:29:11.915: INFO: Pod "pod-2b886e01-4e4e-423c-8662-b9d4bb906a50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078525627s STEP: Saw pod success May 27 00:29:11.915: INFO: Pod "pod-2b886e01-4e4e-423c-8662-b9d4bb906a50" satisfied condition "Succeeded or Failed" May 27 00:29:11.917: INFO: Trying to get logs from node latest-worker2 pod pod-2b886e01-4e4e-423c-8662-b9d4bb906a50 container test-container: STEP: delete the pod May 27 00:29:11.944: INFO: Waiting for pod pod-2b886e01-4e4e-423c-8662-b9d4bb906a50 to disappear May 27 00:29:11.947: INFO: Pod pod-2b886e01-4e4e-423c-8662-b9d4bb906a50 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:29:11.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6746" for this suite. 
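
The emptyDir variants in this suite differ only in user, file mode, and medium; medium: Memory is what requests the tmpfs backing checked here. An illustrative pod along those lines (names hypothetical, not the fixture's mounttest image):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        securityContext:
          runAsUser: 1001          # the non-root variant of the test
        command: ["sh", "-c", "echo hello > /ed/f && chmod 0644 /ed/f && ls -l /ed && mount | grep ' /ed '"]
        volumeMounts:
        - name: ed
          mountPath: /ed
      volumes:
      - name: ed
        emptyDir:
          medium: Memory           # tmpfs instead of node-local disk
    EOF
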
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":160,"skipped":2867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:29:11.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:31:12.130: INFO: Deleting pod "var-expansion-ff41a1e8-d854-4ec2-9f71-2ae49840ce42" in namespace "var-expansion-7174" May 27 00:31:12.136: INFO: Wait up to 5m0s for pod "var-expansion-ff41a1e8-d854-4ec2-9f71-2ae49840ce42" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:31:16.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7174" for this suite. • [SLOW TEST:124.218 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":161,"skipped":2906,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:31:16.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9990.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9990.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9990.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9990.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 27 00:31:22.310: INFO: DNS probes using dns-9990/dns-test-6e93a186-fa8b-4293-af67-3825ad475dcf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:31:22.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9990" for this suite. • [SLOW TEST:6.258 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":162,"skipped":2915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:31:22.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:31:33.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1505" for this suite. 
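
Object-count quota is what lets the ResourceQuota above track ReplicaSet creation and release the usage on deletion. A sketch of an equivalent quota (namespace and values hypothetical):

    kubectl create namespace quota-demo
    kubectl apply -n quota-demo -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: test-quota
    spec:
      hard:
        pods: "2"
        count/replicasets.apps: "1"
    EOF
    # creating a ReplicaSet bumps the 'used' column; deleting it releases the usage
    kubectl describe resourcequota test-quota -n quota-demo
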
• [SLOW TEST:11.402 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":163,"skipped":2951,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:31:33.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-7235e08a-a847-403c-8fb6-41cb836debc3 STEP: Creating a pod to test consume configMaps May 27 00:31:34.018: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71b748a1-3bd9-4799-9312-ecea5a48b0d7" in namespace "projected-9220" to be "Succeeded or Failed" May 27 00:31:34.024: INFO: Pod "pod-projected-configmaps-71b748a1-3bd9-4799-9312-ecea5a48b0d7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.784357ms May 27 00:31:36.028: INFO: Pod "pod-projected-configmaps-71b748a1-3bd9-4799-9312-ecea5a48b0d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010278252s May 27 00:31:38.033: INFO: Pod "pod-projected-configmaps-71b748a1-3bd9-4799-9312-ecea5a48b0d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014855281s STEP: Saw pod success May 27 00:31:38.033: INFO: Pod "pod-projected-configmaps-71b748a1-3bd9-4799-9312-ecea5a48b0d7" satisfied condition "Succeeded or Failed" May 27 00:31:38.036: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-71b748a1-3bd9-4799-9312-ecea5a48b0d7 container projected-configmap-volume-test: STEP: delete the pod May 27 00:31:38.094: INFO: Waiting for pod pod-projected-configmaps-71b748a1-3bd9-4799-9312-ecea5a48b0d7 to disappear May 27 00:31:38.107: INFO: Pod pod-projected-configmaps-71b748a1-3bd9-4799-9312-ecea5a48b0d7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:31:38.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9220" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2952,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:31:38.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 27 00:31:38.202: INFO: Waiting up to 5m0s for pod "pod-641eb1e1-bef8-4e4b-a05b-fc861bae9065" in namespace "emptydir-9025" to be "Succeeded or Failed" May 27 00:31:38.236: INFO: Pod "pod-641eb1e1-bef8-4e4b-a05b-fc861bae9065": Phase="Pending", Reason="", readiness=false. Elapsed: 34.17799ms May 27 00:31:40.241: INFO: Pod "pod-641eb1e1-bef8-4e4b-a05b-fc861bae9065": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038956834s May 27 00:31:42.246: INFO: Pod "pod-641eb1e1-bef8-4e4b-a05b-fc861bae9065": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043455988s STEP: Saw pod success May 27 00:31:42.246: INFO: Pod "pod-641eb1e1-bef8-4e4b-a05b-fc861bae9065" satisfied condition "Succeeded or Failed" May 27 00:31:42.249: INFO: Trying to get logs from node latest-worker pod pod-641eb1e1-bef8-4e4b-a05b-fc861bae9065 container test-container: STEP: delete the pod May 27 00:31:42.293: INFO: Waiting for pod pod-641eb1e1-bef8-4e4b-a05b-fc861bae9065 to disappear May 27 00:31:42.300: INFO: Pod pod-641eb1e1-bef8-4e4b-a05b-fc861bae9065 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:31:42.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9025" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2956,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:31:42.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4499 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4499 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4499 May 27 00:31:42.472: INFO: Found 0 stateful pods, waiting for 1 May 27 00:31:52.478: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 27 00:31:52.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4499 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 00:31:52.752: INFO: stderr: "I0527 00:31:52.623304 1815 log.go:172] (0xc000021c30) (0xc0005c8dc0) Create stream\nI0527 00:31:52.623360 1815 log.go:172] (0xc000021c30) (0xc0005c8dc0) Stream added, broadcasting: 1\nI0527 00:31:52.626112 1815 log.go:172] (0xc000021c30) Reply frame received for 1\nI0527 00:31:52.626149 1815 log.go:172] (0xc000021c30) (0xc000578640) Create stream\nI0527 00:31:52.626158 1815 log.go:172] (0xc000021c30) (0xc000578640) Stream added, broadcasting: 3\nI0527 00:31:52.626995 1815 log.go:172] (0xc000021c30) Reply frame received for 3\nI0527 00:31:52.627026 1815 log.go:172] (0xc000021c30) (0xc0005c9d60) Create stream\nI0527 00:31:52.627035 1815 log.go:172] (0xc000021c30) (0xc0005c9d60) Stream added, broadcasting: 5\nI0527 00:31:52.627810 1815 log.go:172] (0xc000021c30) Reply frame received for 5\nI0527 00:31:52.711986 1815 log.go:172] (0xc000021c30) Data frame received for 5\nI0527 00:31:52.712026 1815 log.go:172] (0xc0005c9d60) (5) Data frame handling\nI0527 00:31:52.712050 1815 log.go:172] (0xc0005c9d60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 00:31:52.743029 1815 log.go:172] (0xc000021c30) Data frame received for 3\nI0527 00:31:52.743060 1815 log.go:172] (0xc000578640) (3) Data frame handling\nI0527 00:31:52.743073 1815 log.go:172] (0xc000578640) (3) Data frame sent\nI0527 00:31:52.743081 1815 log.go:172] 
(0xc000021c30) Data frame received for 3\nI0527 00:31:52.743088 1815 log.go:172] (0xc000578640) (3) Data frame handling\nI0527 00:31:52.743471 1815 log.go:172] (0xc000021c30) Data frame received for 5\nI0527 00:31:52.743502 1815 log.go:172] (0xc0005c9d60) (5) Data frame handling\nI0527 00:31:52.745350 1815 log.go:172] (0xc000021c30) Data frame received for 1\nI0527 00:31:52.745364 1815 log.go:172] (0xc0005c8dc0) (1) Data frame handling\nI0527 00:31:52.745381 1815 log.go:172] (0xc0005c8dc0) (1) Data frame sent\nI0527 00:31:52.745751 1815 log.go:172] (0xc000021c30) (0xc0005c8dc0) Stream removed, broadcasting: 1\nI0527 00:31:52.745775 1815 log.go:172] (0xc000021c30) Go away received\nI0527 00:31:52.746233 1815 log.go:172] (0xc000021c30) (0xc0005c8dc0) Stream removed, broadcasting: 1\nI0527 00:31:52.746250 1815 log.go:172] (0xc000021c30) (0xc000578640) Stream removed, broadcasting: 3\nI0527 00:31:52.746256 1815 log.go:172] (0xc000021c30) (0xc0005c9d60) Stream removed, broadcasting: 5\n" May 27 00:31:52.752: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 00:31:52.752: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 00:31:52.755: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 27 00:32:02.760: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 27 00:32:02.760: INFO: Waiting for statefulset status.replicas updated to 0 May 27 00:32:02.792: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999449s May 27 00:32:03.796: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.979969583s May 27 00:32:04.801: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975335534s May 27 00:32:05.806: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970684875s May 27 00:32:06.811: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965835159s May 27 00:32:07.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.960891105s May 27 00:32:08.829: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.948528117s May 27 00:32:09.834: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.943037035s May 27 00:32:10.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.937789005s May 27 00:32:11.843: INFO: Verifying statefulset ss doesn't scale past 1 for another 933.507624ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4499 May 27 00:32:12.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4499 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 00:32:13.080: INFO: stderr: "I0527 00:32:12.976750 1837 log.go:172] (0xc0004f0370) (0xc0006ca500) Create stream\nI0527 00:32:12.976805 1837 log.go:172] (0xc0004f0370) (0xc0006ca500) Stream added, broadcasting: 1\nI0527 00:32:12.979627 1837 log.go:172] (0xc0004f0370) Reply frame received for 1\nI0527 00:32:12.979680 1837 log.go:172] (0xc0004f0370) (0xc0006d2500) Create stream\nI0527 00:32:12.979697 1837 log.go:172] (0xc0004f0370) (0xc0006d2500) Stream added, broadcasting: 3\nI0527 00:32:12.980780 1837 log.go:172] (0xc0004f0370) Reply frame received for 3\nI0527 00:32:12.980826 1837 log.go:172] (0xc0004f0370) (0xc0006caa00) Create 
stream\nI0527 00:32:12.980850 1837 log.go:172] (0xc0004f0370) (0xc0006caa00) Stream added, broadcasting: 5\nI0527 00:32:12.982127 1837 log.go:172] (0xc0004f0370) Reply frame received for 5\nI0527 00:32:13.072423 1837 log.go:172] (0xc0004f0370) Data frame received for 5\nI0527 00:32:13.072448 1837 log.go:172] (0xc0006caa00) (5) Data frame handling\nI0527 00:32:13.072465 1837 log.go:172] (0xc0006caa00) (5) Data frame sent\nI0527 00:32:13.072476 1837 log.go:172] (0xc0004f0370) Data frame received for 5\nI0527 00:32:13.072485 1837 log.go:172] (0xc0006caa00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0527 00:32:13.072658 1837 log.go:172] (0xc0004f0370) Data frame received for 3\nI0527 00:32:13.072679 1837 log.go:172] (0xc0006d2500) (3) Data frame handling\nI0527 00:32:13.072704 1837 log.go:172] (0xc0006d2500) (3) Data frame sent\nI0527 00:32:13.072822 1837 log.go:172] (0xc0004f0370) Data frame received for 3\nI0527 00:32:13.072841 1837 log.go:172] (0xc0006d2500) (3) Data frame handling\nI0527 00:32:13.074305 1837 log.go:172] (0xc0004f0370) Data frame received for 1\nI0527 00:32:13.074422 1837 log.go:172] (0xc0006ca500) (1) Data frame handling\nI0527 00:32:13.074468 1837 log.go:172] (0xc0006ca500) (1) Data frame sent\nI0527 00:32:13.074498 1837 log.go:172] (0xc0004f0370) (0xc0006ca500) Stream removed, broadcasting: 1\nI0527 00:32:13.074569 1837 log.go:172] (0xc0004f0370) Go away received\nI0527 00:32:13.074976 1837 log.go:172] (0xc0004f0370) (0xc0006ca500) Stream removed, broadcasting: 1\nI0527 00:32:13.075009 1837 log.go:172] (0xc0004f0370) (0xc0006d2500) Stream removed, broadcasting: 3\nI0527 00:32:13.075031 1837 log.go:172] (0xc0004f0370) (0xc0006caa00) Stream removed, broadcasting: 5\n" May 27 00:32:13.080: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 00:32:13.080: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 27 00:32:13.084: INFO: Found 1 stateful pods, waiting for 3 May 27 00:32:23.091: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 27 00:32:23.091: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 27 00:32:23.091: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 27 00:32:23.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4499 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 00:32:23.343: INFO: stderr: "I0527 00:32:23.241774 1859 log.go:172] (0xc000698420) (0xc000538e60) Create stream\nI0527 00:32:23.241855 1859 log.go:172] (0xc000698420) (0xc000538e60) Stream added, broadcasting: 1\nI0527 00:32:23.244322 1859 log.go:172] (0xc000698420) Reply frame received for 1\nI0527 00:32:23.244359 1859 log.go:172] (0xc000698420) (0xc00030c960) Create stream\nI0527 00:32:23.244383 1859 log.go:172] (0xc000698420) (0xc00030c960) Stream added, broadcasting: 3\nI0527 00:32:23.245688 1859 log.go:172] (0xc000698420) Reply frame received for 3\nI0527 00:32:23.245744 1859 log.go:172] (0xc000698420) (0xc0003b0000) Create stream\nI0527 00:32:23.245758 1859 log.go:172] (0xc000698420) (0xc0003b0000) Stream added, broadcasting: 5\nI0527 00:32:23.246888 1859 log.go:172] 
(0xc000698420) Reply frame received for 5\nI0527 00:32:23.336913 1859 log.go:172] (0xc000698420) Data frame received for 3\nI0527 00:32:23.336968 1859 log.go:172] (0xc000698420) Data frame received for 1\nI0527 00:32:23.336997 1859 log.go:172] (0xc000538e60) (1) Data frame handling\nI0527 00:32:23.337016 1859 log.go:172] (0xc000538e60) (1) Data frame sent\nI0527 00:32:23.337040 1859 log.go:172] (0xc000698420) (0xc000538e60) Stream removed, broadcasting: 1\nI0527 00:32:23.337079 1859 log.go:172] (0xc00030c960) (3) Data frame handling\nI0527 00:32:23.337092 1859 log.go:172] (0xc00030c960) (3) Data frame sent\nI0527 00:32:23.337104 1859 log.go:172] (0xc000698420) Data frame received for 3\nI0527 00:32:23.337308 1859 log.go:172] (0xc000698420) Data frame received for 5\nI0527 00:32:23.337370 1859 log.go:172] (0xc0003b0000) (5) Data frame handling\nI0527 00:32:23.337393 1859 log.go:172] (0xc0003b0000) (5) Data frame sent\nI0527 00:32:23.337415 1859 log.go:172] (0xc000698420) Data frame received for 5\nI0527 00:32:23.337430 1859 log.go:172] (0xc0003b0000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 00:32:23.337452 1859 log.go:172] (0xc00030c960) (3) Data frame handling\nI0527 00:32:23.337477 1859 log.go:172] (0xc000698420) Go away received\nI0527 00:32:23.337628 1859 log.go:172] (0xc000698420) (0xc000538e60) Stream removed, broadcasting: 1\nI0527 00:32:23.337649 1859 log.go:172] (0xc000698420) (0xc00030c960) Stream removed, broadcasting: 3\nI0527 00:32:23.337666 1859 log.go:172] (0xc000698420) (0xc0003b0000) Stream removed, broadcasting: 5\n" May 27 00:32:23.343: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 00:32:23.343: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 00:32:23.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4499 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 00:32:23.593: INFO: stderr: "I0527 00:32:23.466576 1880 log.go:172] (0xc000930420) (0xc0003adae0) Create stream\nI0527 00:32:23.466633 1880 log.go:172] (0xc000930420) (0xc0003adae0) Stream added, broadcasting: 1\nI0527 00:32:23.469075 1880 log.go:172] (0xc000930420) Reply frame received for 1\nI0527 00:32:23.469321 1880 log.go:172] (0xc000930420) (0xc0007983c0) Create stream\nI0527 00:32:23.469353 1880 log.go:172] (0xc000930420) (0xc0007983c0) Stream added, broadcasting: 3\nI0527 00:32:23.470366 1880 log.go:172] (0xc000930420) Reply frame received for 3\nI0527 00:32:23.470399 1880 log.go:172] (0xc000930420) (0xc0002920a0) Create stream\nI0527 00:32:23.470421 1880 log.go:172] (0xc000930420) (0xc0002920a0) Stream added, broadcasting: 5\nI0527 00:32:23.471474 1880 log.go:172] (0xc000930420) Reply frame received for 5\nI0527 00:32:23.557081 1880 log.go:172] (0xc000930420) Data frame received for 5\nI0527 00:32:23.557311 1880 log.go:172] (0xc0002920a0) (5) Data frame handling\nI0527 00:32:23.557330 1880 log.go:172] (0xc0002920a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 00:32:23.585042 1880 log.go:172] (0xc000930420) Data frame received for 5\nI0527 00:32:23.585083 1880 log.go:172] (0xc0002920a0) (5) Data frame handling\nI0527 00:32:23.585321 1880 log.go:172] (0xc000930420) Data frame received for 3\nI0527 00:32:23.585363 1880 log.go:172] (0xc0007983c0) (3) Data frame handling\nI0527 
00:32:23.585395 1880 log.go:172] (0xc0007983c0) (3) Data frame sent\nI0527 00:32:23.585416 1880 log.go:172] (0xc000930420) Data frame received for 3\nI0527 00:32:23.585433 1880 log.go:172] (0xc0007983c0) (3) Data frame handling\nI0527 00:32:23.586671 1880 log.go:172] (0xc000930420) Data frame received for 1\nI0527 00:32:23.586702 1880 log.go:172] (0xc0003adae0) (1) Data frame handling\nI0527 00:32:23.586716 1880 log.go:172] (0xc0003adae0) (1) Data frame sent\nI0527 00:32:23.586731 1880 log.go:172] (0xc000930420) (0xc0003adae0) Stream removed, broadcasting: 1\nI0527 00:32:23.587057 1880 log.go:172] (0xc000930420) (0xc0003adae0) Stream removed, broadcasting: 1\nI0527 00:32:23.587075 1880 log.go:172] (0xc000930420) (0xc0007983c0) Stream removed, broadcasting: 3\nI0527 00:32:23.587097 1880 log.go:172] (0xc000930420) (0xc0002920a0) Stream removed, broadcasting: 5\n" May 27 00:32:23.593: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 00:32:23.593: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 00:32:23.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4499 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 00:32:23.853: INFO: stderr: "I0527 00:32:23.715933 1903 log.go:172] (0xc000906580) (0xc0001377c0) Create stream\nI0527 00:32:23.715979 1903 log.go:172] (0xc000906580) (0xc0001377c0) Stream added, broadcasting: 1\nI0527 00:32:23.718479 1903 log.go:172] (0xc000906580) Reply frame received for 1\nI0527 00:32:23.718519 1903 log.go:172] (0xc000906580) (0xc000694b40) Create stream\nI0527 00:32:23.718529 1903 log.go:172] (0xc000906580) (0xc000694b40) Stream added, broadcasting: 3\nI0527 00:32:23.719311 1903 log.go:172] (0xc000906580) Reply frame received for 3\nI0527 00:32:23.719338 1903 log.go:172] (0xc000906580) (0xc0000dd040) Create stream\nI0527 00:32:23.719346 1903 log.go:172] (0xc000906580) (0xc0000dd040) Stream added, broadcasting: 5\nI0527 00:32:23.720002 1903 log.go:172] (0xc000906580) Reply frame received for 5\nI0527 00:32:23.785937 1903 log.go:172] (0xc000906580) Data frame received for 5\nI0527 00:32:23.785964 1903 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0527 00:32:23.785984 1903 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 00:32:23.843319 1903 log.go:172] (0xc000906580) Data frame received for 3\nI0527 00:32:23.843491 1903 log.go:172] (0xc000694b40) (3) Data frame handling\nI0527 00:32:23.843589 1903 log.go:172] (0xc000694b40) (3) Data frame sent\nI0527 00:32:23.843657 1903 log.go:172] (0xc000906580) Data frame received for 3\nI0527 00:32:23.843819 1903 log.go:172] (0xc000694b40) (3) Data frame handling\nI0527 00:32:23.843855 1903 log.go:172] (0xc000906580) Data frame received for 5\nI0527 00:32:23.843868 1903 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0527 00:32:23.847389 1903 log.go:172] (0xc000906580) Data frame received for 1\nI0527 00:32:23.847413 1903 log.go:172] (0xc0001377c0) (1) Data frame handling\nI0527 00:32:23.847441 1903 log.go:172] (0xc0001377c0) (1) Data frame sent\nI0527 00:32:23.847459 1903 log.go:172] (0xc000906580) (0xc0001377c0) Stream removed, broadcasting: 1\nI0527 00:32:23.847665 1903 log.go:172] (0xc000906580) Go away received\nI0527 00:32:23.847776 1903 log.go:172] (0xc000906580) (0xc0001377c0) Stream removed, 
broadcasting: 1\nI0527 00:32:23.847794 1903 log.go:172] (0xc000906580) (0xc000694b40) Stream removed, broadcasting: 3\nI0527 00:32:23.847807 1903 log.go:172] (0xc000906580) (0xc0000dd040) Stream removed, broadcasting: 5\n" May 27 00:32:23.853: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 00:32:23.853: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 00:32:23.853: INFO: Waiting for statefulset status.replicas updated to 0 May 27 00:32:23.856: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 27 00:32:33.865: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 27 00:32:33.865: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 27 00:32:33.865: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 27 00:32:33.915: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999631s May 27 00:32:34.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.956279655s May 27 00:32:35.927: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.950717432s May 27 00:32:36.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.944755687s May 27 00:32:37.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.940279134s May 27 00:32:38.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.934381853s May 27 00:32:39.948: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.928865488s May 27 00:32:40.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.923310246s May 27 00:32:41.960: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.917719568s May 27 00:32:42.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 911.970825ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4499 May 27 00:32:43.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4499 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 00:32:44.267: INFO: stderr: "I0527 00:32:44.149466 1926 log.go:172] (0xc000b1f600) (0xc000ad21e0) Create stream\nI0527 00:32:44.149563 1926 log.go:172] (0xc000b1f600) (0xc000ad21e0) Stream added, broadcasting: 1\nI0527 00:32:44.154207 1926 log.go:172] (0xc000b1f600) Reply frame received for 1\nI0527 00:32:44.154235 1926 log.go:172] (0xc000b1f600) (0xc0008445a0) Create stream\nI0527 00:32:44.154243 1926 log.go:172] (0xc000b1f600) (0xc0008445a0) Stream added, broadcasting: 3\nI0527 00:32:44.155139 1926 log.go:172] (0xc000b1f600) Reply frame received for 3\nI0527 00:32:44.155186 1926 log.go:172] (0xc000b1f600) (0xc000702c80) Create stream\nI0527 00:32:44.155207 1926 log.go:172] (0xc000b1f600) (0xc000702c80) Stream added, broadcasting: 5\nI0527 00:32:44.156219 1926 log.go:172] (0xc000b1f600) Reply frame received for 5\nI0527 00:32:44.258285 1926 log.go:172] (0xc000b1f600) Data frame received for 3\nI0527 00:32:44.258331 1926 log.go:172] (0xc0008445a0) (3) Data frame handling\nI0527 00:32:44.258355 1926 log.go:172] (0xc0008445a0) (3) Data frame sent\nI0527 00:32:44.258367 1926 log.go:172] (0xc000b1f600) Data frame received for 3\nI0527 00:32:44.258377 1926 log.go:172] (0xc0008445a0) (3) Data 
frame handling\nI0527 00:32:44.258442 1926 log.go:172] (0xc000b1f600) Data frame received for 5\nI0527 00:32:44.258496 1926 log.go:172] (0xc000702c80) (5) Data frame handling\nI0527 00:32:44.258522 1926 log.go:172] (0xc000702c80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0527 00:32:44.258542 1926 log.go:172] (0xc000b1f600) Data frame received for 5\nI0527 00:32:44.258557 1926 log.go:172] (0xc000702c80) (5) Data frame handling\nI0527 00:32:44.259965 1926 log.go:172] (0xc000b1f600) Data frame received for 1\nI0527 00:32:44.260066 1926 log.go:172] (0xc000ad21e0) (1) Data frame handling\nI0527 00:32:44.260100 1926 log.go:172] (0xc000ad21e0) (1) Data frame sent\nI0527 00:32:44.260144 1926 log.go:172] (0xc000b1f600) (0xc000ad21e0) Stream removed, broadcasting: 1\nI0527 00:32:44.260186 1926 log.go:172] (0xc000b1f600) Go away received\nI0527 00:32:44.260517 1926 log.go:172] (0xc000b1f600) (0xc000ad21e0) Stream removed, broadcasting: 1\nI0527 00:32:44.260540 1926 log.go:172] (0xc000b1f600) (0xc0008445a0) Stream removed, broadcasting: 3\nI0527 00:32:44.260553 1926 log.go:172] (0xc000b1f600) (0xc000702c80) Stream removed, broadcasting: 5\n" May 27 00:32:44.267: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 00:32:44.267: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 27 00:32:44.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4499 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 00:32:44.459: INFO: stderr: "I0527 00:32:44.394446 1947 log.go:172] (0xc00003a420) (0xc000572460) Create stream\nI0527 00:32:44.394503 1947 log.go:172] (0xc00003a420) (0xc000572460) Stream added, broadcasting: 1\nI0527 00:32:44.396926 1947 log.go:172] (0xc00003a420) Reply frame received for 1\nI0527 00:32:44.396967 1947 log.go:172] (0xc00003a420) (0xc00054a140) Create stream\nI0527 00:32:44.396978 1947 log.go:172] (0xc00003a420) (0xc00054a140) Stream added, broadcasting: 3\nI0527 00:32:44.398054 1947 log.go:172] (0xc00003a420) Reply frame received for 3\nI0527 00:32:44.398087 1947 log.go:172] (0xc00003a420) (0xc00054b0e0) Create stream\nI0527 00:32:44.398104 1947 log.go:172] (0xc00003a420) (0xc00054b0e0) Stream added, broadcasting: 5\nI0527 00:32:44.399065 1947 log.go:172] (0xc00003a420) Reply frame received for 5\nI0527 00:32:44.451945 1947 log.go:172] (0xc00003a420) Data frame received for 5\nI0527 00:32:44.451990 1947 log.go:172] (0xc00054b0e0) (5) Data frame handling\nI0527 00:32:44.452014 1947 log.go:172] (0xc00054b0e0) (5) Data frame sent\nI0527 00:32:44.452030 1947 log.go:172] (0xc00003a420) Data frame received for 5\nI0527 00:32:44.452041 1947 log.go:172] (0xc00054b0e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0527 00:32:44.452074 1947 log.go:172] (0xc00003a420) Data frame received for 3\nI0527 00:32:44.452088 1947 log.go:172] (0xc00054a140) (3) Data frame handling\nI0527 00:32:44.452099 1947 log.go:172] (0xc00054a140) (3) Data frame sent\nI0527 00:32:44.452110 1947 log.go:172] (0xc00003a420) Data frame received for 3\nI0527 00:32:44.452121 1947 log.go:172] (0xc00054a140) (3) Data frame handling\nI0527 00:32:44.453828 1947 log.go:172] (0xc00003a420) Data frame received for 1\nI0527 00:32:44.453864 1947 log.go:172] (0xc000572460) (1) Data frame handling\nI0527 00:32:44.453887 1947 
log.go:172] (0xc000572460) (1) Data frame sent\nI0527 00:32:44.453907 1947 log.go:172] (0xc00003a420) (0xc000572460) Stream removed, broadcasting: 1\nI0527 00:32:44.453924 1947 log.go:172] (0xc00003a420) Go away received\nI0527 00:32:44.454350 1947 log.go:172] (0xc00003a420) (0xc000572460) Stream removed, broadcasting: 1\nI0527 00:32:44.454402 1947 log.go:172] (0xc00003a420) (0xc00054a140) Stream removed, broadcasting: 3\nI0527 00:32:44.454423 1947 log.go:172] (0xc00003a420) (0xc00054b0e0) Stream removed, broadcasting: 5\n" May 27 00:32:44.459: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 00:32:44.459: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 27 00:32:44.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4499 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 00:32:44.651: INFO: stderr: "I0527 00:32:44.589643 1966 log.go:172] (0xc0000e8370) (0xc00057e280) Create stream\nI0527 00:32:44.589717 1966 log.go:172] (0xc0000e8370) (0xc00057e280) Stream added, broadcasting: 1\nI0527 00:32:44.591286 1966 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0527 00:32:44.591348 1966 log.go:172] (0xc0000e8370) (0xc000534dc0) Create stream\nI0527 00:32:44.591373 1966 log.go:172] (0xc0000e8370) (0xc000534dc0) Stream added, broadcasting: 3\nI0527 00:32:44.592254 1966 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0527 00:32:44.592292 1966 log.go:172] (0xc0000e8370) (0xc0006d8aa0) Create stream\nI0527 00:32:44.592311 1966 log.go:172] (0xc0000e8370) (0xc0006d8aa0) Stream added, broadcasting: 5\nI0527 00:32:44.593288 1966 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0527 00:32:44.643233 1966 log.go:172] (0xc0000e8370) Data frame received for 5\nI0527 00:32:44.643270 1966 log.go:172] (0xc0006d8aa0) (5) Data frame handling\nI0527 00:32:44.643287 1966 log.go:172] (0xc0006d8aa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0527 00:32:44.643299 1966 log.go:172] (0xc0000e8370) Data frame received for 5\nI0527 00:32:44.643325 1966 log.go:172] (0xc0006d8aa0) (5) Data frame handling\nI0527 00:32:44.643358 1966 log.go:172] (0xc0000e8370) Data frame received for 3\nI0527 00:32:44.643381 1966 log.go:172] (0xc000534dc0) (3) Data frame handling\nI0527 00:32:44.643399 1966 log.go:172] (0xc000534dc0) (3) Data frame sent\nI0527 00:32:44.643415 1966 log.go:172] (0xc0000e8370) Data frame received for 3\nI0527 00:32:44.643423 1966 log.go:172] (0xc000534dc0) (3) Data frame handling\nI0527 00:32:44.644932 1966 log.go:172] (0xc0000e8370) Data frame received for 1\nI0527 00:32:44.644972 1966 log.go:172] (0xc00057e280) (1) Data frame handling\nI0527 00:32:44.644986 1966 log.go:172] (0xc00057e280) (1) Data frame sent\nI0527 00:32:44.645002 1966 log.go:172] (0xc0000e8370) (0xc00057e280) Stream removed, broadcasting: 1\nI0527 00:32:44.645020 1966 log.go:172] (0xc0000e8370) Go away received\nI0527 00:32:44.645694 1966 log.go:172] (0xc0000e8370) (0xc00057e280) Stream removed, broadcasting: 1\nI0527 00:32:44.645715 1966 log.go:172] (0xc0000e8370) (0xc000534dc0) Stream removed, broadcasting: 3\nI0527 00:32:44.645726 1966 log.go:172] (0xc0000e8370) (0xc0006d8aa0) Stream removed, broadcasting: 5\n" May 27 00:32:44.651: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 00:32:44.651: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 27 00:32:44.651: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 27 00:33:14.674: INFO: Deleting all statefulset in ns statefulset-4499 May 27 00:33:14.678: INFO: Scaling statefulset ss to 0 May 27 00:33:14.688: INFO: Waiting for statefulset status.replicas updated to 0 May 27 00:33:14.691: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:33:14.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4499" for this suite. • [SLOW TEST:92.379 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":166,"skipped":2967,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:33:14.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:33:28.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4327" for this suite. 
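The ResourceQuota spec above creates a quota, admits a pod that fits, rejects pods that would exceed the remaining quota, and checks that usage is released on delete. A minimal sketch of the kind of objects it exercises; the names, limits, and image below are illustrative, not the suite's actual fixtures:

# kubectl apply -f this file, then `kubectl describe quota demo-quota`
# shows the pod counted under Used; deleting the pod releases the usage.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    pods: "2"
    requests.cpu: 500m
    requests.memory: 256Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: quota-pod
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sleep", "3600"]
    resources:
      requests:        # must fit inside the quota or admission rejects the pod
        cpu: 100m
        memory: 64Mi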
• [SLOW TEST:14.232 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":167,"skipped":2981,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:33:28.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 27 00:33:29.042: INFO: Waiting up to 5m0s for pod "pod-d55b2257-a9fe-498f-8e1f-e95b1dd0fb4e" in namespace "emptydir-4390" to be "Succeeded or Failed" May 27 00:33:29.106: INFO: Pod "pod-d55b2257-a9fe-498f-8e1f-e95b1dd0fb4e": Phase="Pending", Reason="", readiness=false. Elapsed: 63.650174ms May 27 00:33:31.218: INFO: Pod "pod-d55b2257-a9fe-498f-8e1f-e95b1dd0fb4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175857439s May 27 00:33:33.222: INFO: Pod "pod-d55b2257-a9fe-498f-8e1f-e95b1dd0fb4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179464774s STEP: Saw pod success May 27 00:33:33.222: INFO: Pod "pod-d55b2257-a9fe-498f-8e1f-e95b1dd0fb4e" satisfied condition "Succeeded or Failed" May 27 00:33:33.224: INFO: Trying to get logs from node latest-worker pod pod-d55b2257-a9fe-498f-8e1f-e95b1dd0fb4e container test-container: STEP: delete the pod May 27 00:33:33.317: INFO: Waiting for pod pod-d55b2257-a9fe-498f-8e1f-e95b1dd0fb4e to disappear May 27 00:33:33.327: INFO: Pod pod-d55b2257-a9fe-498f-8e1f-e95b1dd0fb4e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:33:33.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4390" for this suite. 
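The emptydir (non-root,0666,tmpfs) variant just finished runs a non-root pod against a RAM-backed emptyDir and verifies a 0666 file mode. A rough equivalent, with illustrative names and UID rather than the test's generated fixture:

# The container writes a file into the tmpfs volume and prints its mode.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the (non-root,...) variant runs as a non-root UID
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /cache/f && chmod 0666 /cache/f && stat -c '%a' /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory           # (...,tmpfs): the volume is RAM-backed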
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":168,"skipped":3000,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:33:33.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4cf63edc-d64d-4ab8-a65c-bdedcba55fe7 STEP: Creating a pod to test consume secrets May 27 00:33:33.443: INFO: Waiting up to 5m0s for pod "pod-secrets-1aa40053-1225-450e-95a1-495421b35299" in namespace "secrets-1908" to be "Succeeded or Failed" May 27 00:33:33.447: INFO: Pod "pod-secrets-1aa40053-1225-450e-95a1-495421b35299": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029469ms May 27 00:33:37.157: INFO: Pod "pod-secrets-1aa40053-1225-450e-95a1-495421b35299": Phase="Pending", Reason="", readiness=false. Elapsed: 3.713695611s May 27 00:33:39.213: INFO: Pod "pod-secrets-1aa40053-1225-450e-95a1-495421b35299": Phase="Pending", Reason="", readiness=false. Elapsed: 5.770365645s May 27 00:33:41.218: INFO: Pod "pod-secrets-1aa40053-1225-450e-95a1-495421b35299": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.775030673s STEP: Saw pod success May 27 00:33:41.218: INFO: Pod "pod-secrets-1aa40053-1225-450e-95a1-495421b35299" satisfied condition "Succeeded or Failed" May 27 00:33:41.221: INFO: Trying to get logs from node latest-worker pod pod-secrets-1aa40053-1225-450e-95a1-495421b35299 container secret-volume-test: STEP: delete the pod May 27 00:33:41.616: INFO: Waiting for pod pod-secrets-1aa40053-1225-450e-95a1-495421b35299 to disappear May 27 00:33:41.646: INFO: Pod pod-secrets-1aa40053-1225-450e-95a1-495421b35299 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:33:41.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1908" for this suite. 
• [SLOW TEST:8.324 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":3049,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:33:41.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-191b3482-4c79-44dc-98b2-57276ac568f7 in namespace container-probe-528 May 27 00:33:45.775: INFO: Started pod test-webserver-191b3482-4c79-44dc-98b2-57276ac568f7 in namespace container-probe-528 STEP: checking the pod's current state and verifying that restartCount is present May 27 00:33:45.776: INFO: Initial restart count of pod test-webserver-191b3482-4c79-44dc-98b2-57276ac568f7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:37:46.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-528" for this suite. 
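The probe spec above watches restartCount for four minutes and expects it to stay at 0 because the http liveness probe keeps succeeding. A sketch of that setup, assuming a plain nginx image in place of the suite's test-webserver:

# Expect zero restarts while the server answers the probe; check with:
#   kubectl get pod probe-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: test-webserver
    image: nginx:1.19          # assumed image; any webserver answering on / works
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3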
• [SLOW TEST:244.743 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":3060,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:37:46.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:37:46.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-773ce060-fe2c-4321-9a17-3c8b5adb92f5" in namespace "projected-3238" to be "Succeeded or Failed" May 27 00:37:46.815: INFO: Pod "downwardapi-volume-773ce060-fe2c-4321-9a17-3c8b5adb92f5": Phase="Pending", Reason="", readiness=false. Elapsed: 349.520013ms May 27 00:37:48.820: INFO: Pod "downwardapi-volume-773ce060-fe2c-4321-9a17-3c8b5adb92f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353971826s May 27 00:37:50.825: INFO: Pod "downwardapi-volume-773ce060-fe2c-4321-9a17-3c8b5adb92f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.35924706s STEP: Saw pod success May 27 00:37:50.825: INFO: Pod "downwardapi-volume-773ce060-fe2c-4321-9a17-3c8b5adb92f5" satisfied condition "Succeeded or Failed" May 27 00:37:50.828: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-773ce060-fe2c-4321-9a17-3c8b5adb92f5 container client-container: STEP: delete the pod May 27 00:37:50.879: INFO: Waiting for pod downwardapi-volume-773ce060-fe2c-4321-9a17-3c8b5adb92f5 to disappear May 27 00:37:50.890: INFO: Pod downwardapi-volume-773ce060-fe2c-4321-9a17-3c8b5adb92f5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:37:50.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3238" for this suite. 
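The projected downwardAPI spec above surfaces the container's cpu request as a file in a projected volume. A minimal sketch under assumed names and a 250m request:

# With divisor 1m, the mounted file /etc/podinfo/cpu_request reads "250".
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m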
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":3079,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:37:50.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:37:51.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-812" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":172,"skipped":3088,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:37:51.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-bc6efbd9-0233-42b7-a065-b94622a81a16 in namespace container-probe-5662 May 27 00:37:55.186: INFO: Started pod busybox-bc6efbd9-0233-42b7-a065-b94622a81a16 in namespace container-probe-5662 STEP: checking the pod's current state and verifying that restartCount is present May 27 00:37:55.190: INFO: Initial restart count of pod busybox-bc6efbd9-0233-42b7-a065-b94622a81a16 is 0 May 27 00:38:50.358: INFO: Restart count of pod container-probe-5662/busybox-bc6efbd9-0233-42b7-a065-b94622a81a16 is now 1 (55.168032764s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:38:50.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "container-probe-5662" for this suite. • [SLOW TEST:59.444 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":3092,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:38:50.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:39:07.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-408" for this suite. • [SLOW TEST:17.227 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":288,"completed":174,"skipped":3094,"failed":0} [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:39:07.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:39:07.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6101" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":175,"skipped":3094,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:39:07.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-664 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-664 I0527 00:39:08.111234 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-664, replica count: 2 I0527 00:39:11.161703 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:39:14.162071 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 00:39:14.162: INFO: Creating new exec pod May 27 00:39:19.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-664 execpodtlsj7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 27 00:39:22.202: INFO: stderr: "I0527 00:39:22.079578 1988 log.go:172] (0xc00068c000) (0xc00068aa00) Create stream\nI0527 00:39:22.079639 1988 log.go:172] (0xc00068c000) (0xc00068aa00) Stream added, 
broadcasting: 1\nI0527 00:39:22.082826 1988 log.go:172] (0xc00068c000) Reply frame received for 1\nI0527 00:39:22.082873 1988 log.go:172] (0xc00068c000) (0xc00065cc80) Create stream\nI0527 00:39:22.082888 1988 log.go:172] (0xc00068c000) (0xc00065cc80) Stream added, broadcasting: 3\nI0527 00:39:22.083939 1988 log.go:172] (0xc00068c000) Reply frame received for 3\nI0527 00:39:22.084008 1988 log.go:172] (0xc00068c000) (0xc000634500) Create stream\nI0527 00:39:22.084038 1988 log.go:172] (0xc00068c000) (0xc000634500) Stream added, broadcasting: 5\nI0527 00:39:22.084947 1988 log.go:172] (0xc00068c000) Reply frame received for 5\nI0527 00:39:22.168445 1988 log.go:172] (0xc00068c000) Data frame received for 5\nI0527 00:39:22.168477 1988 log.go:172] (0xc000634500) (5) Data frame handling\nI0527 00:39:22.168495 1988 log.go:172] (0xc000634500) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0527 00:39:22.194456 1988 log.go:172] (0xc00068c000) Data frame received for 5\nI0527 00:39:22.194486 1988 log.go:172] (0xc000634500) (5) Data frame handling\nI0527 00:39:22.194504 1988 log.go:172] (0xc000634500) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0527 00:39:22.194543 1988 log.go:172] (0xc00068c000) Data frame received for 3\nI0527 00:39:22.194576 1988 log.go:172] (0xc00065cc80) (3) Data frame handling\nI0527 00:39:22.195060 1988 log.go:172] (0xc00068c000) Data frame received for 5\nI0527 00:39:22.195089 1988 log.go:172] (0xc000634500) (5) Data frame handling\nI0527 00:39:22.197007 1988 log.go:172] (0xc00068c000) Data frame received for 1\nI0527 00:39:22.197036 1988 log.go:172] (0xc00068aa00) (1) Data frame handling\nI0527 00:39:22.197057 1988 log.go:172] (0xc00068aa00) (1) Data frame sent\nI0527 00:39:22.197088 1988 log.go:172] (0xc00068c000) (0xc00068aa00) Stream removed, broadcasting: 1\nI0527 00:39:22.197398 1988 log.go:172] (0xc00068c000) Go away received\nI0527 00:39:22.197825 1988 log.go:172] (0xc00068c000) (0xc00068aa00) Stream removed, broadcasting: 1\nI0527 00:39:22.197866 1988 log.go:172] (0xc00068c000) (0xc00065cc80) Stream removed, broadcasting: 3\nI0527 00:39:22.197891 1988 log.go:172] (0xc00068c000) (0xc000634500) Stream removed, broadcasting: 5\n" May 27 00:39:22.202: INFO: stdout: "" May 27 00:39:22.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-664 execpodtlsj7 -- /bin/sh -x -c nc -zv -t -w 2 10.100.137.88 80' May 27 00:39:22.403: INFO: stderr: "I0527 00:39:22.335743 2017 log.go:172] (0xc000adfa20) (0xc000a626e0) Create stream\nI0527 00:39:22.335900 2017 log.go:172] (0xc000adfa20) (0xc000a626e0) Stream added, broadcasting: 1\nI0527 00:39:22.340517 2017 log.go:172] (0xc000adfa20) Reply frame received for 1\nI0527 00:39:22.340592 2017 log.go:172] (0xc000adfa20) (0xc00024c0a0) Create stream\nI0527 00:39:22.340607 2017 log.go:172] (0xc000adfa20) (0xc00024c0a0) Stream added, broadcasting: 3\nI0527 00:39:22.342030 2017 log.go:172] (0xc000adfa20) Reply frame received for 3\nI0527 00:39:22.342067 2017 log.go:172] (0xc000adfa20) (0xc000a62780) Create stream\nI0527 00:39:22.342077 2017 log.go:172] (0xc000adfa20) (0xc000a62780) Stream added, broadcasting: 5\nI0527 00:39:22.342906 2017 log.go:172] (0xc000adfa20) Reply frame received for 5\nI0527 00:39:22.396046 2017 log.go:172] (0xc000adfa20) Data frame received for 3\nI0527 00:39:22.396159 2017 log.go:172] (0xc00024c0a0) (3) Data frame handling\nI0527 00:39:22.396202 2017 log.go:172] (0xc000adfa20) 
Data frame received for 5\nI0527 00:39:22.396211 2017 log.go:172] (0xc000a62780) (5) Data frame handling\nI0527 00:39:22.396223 2017 log.go:172] (0xc000a62780) (5) Data frame sent\nI0527 00:39:22.396233 2017 log.go:172] (0xc000adfa20) Data frame received for 5\nI0527 00:39:22.396240 2017 log.go:172] (0xc000a62780) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.137.88 80\nConnection to 10.100.137.88 80 port [tcp/http] succeeded!\nI0527 00:39:22.397965 2017 log.go:172] (0xc000adfa20) Data frame received for 1\nI0527 00:39:22.397998 2017 log.go:172] (0xc000a626e0) (1) Data frame handling\nI0527 00:39:22.398035 2017 log.go:172] (0xc000a626e0) (1) Data frame sent\nI0527 00:39:22.398065 2017 log.go:172] (0xc000adfa20) (0xc000a626e0) Stream removed, broadcasting: 1\nI0527 00:39:22.398107 2017 log.go:172] (0xc000adfa20) Go away received\nI0527 00:39:22.398419 2017 log.go:172] (0xc000adfa20) (0xc000a626e0) Stream removed, broadcasting: 1\nI0527 00:39:22.398434 2017 log.go:172] (0xc000adfa20) (0xc00024c0a0) Stream removed, broadcasting: 3\nI0527 00:39:22.398441 2017 log.go:172] (0xc000adfa20) (0xc000a62780) Stream removed, broadcasting: 5\n" May 27 00:39:22.403: INFO: stdout: "" May 27 00:39:22.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-664 execpodtlsj7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31596' May 27 00:39:22.626: INFO: stderr: "I0527 00:39:22.535678 2037 log.go:172] (0xc00003a420) (0xc0004aac80) Create stream\nI0527 00:39:22.535769 2037 log.go:172] (0xc00003a420) (0xc0004aac80) Stream added, broadcasting: 1\nI0527 00:39:22.537996 2037 log.go:172] (0xc00003a420) Reply frame received for 1\nI0527 00:39:22.538040 2037 log.go:172] (0xc00003a420) (0xc000398dc0) Create stream\nI0527 00:39:22.538056 2037 log.go:172] (0xc00003a420) (0xc000398dc0) Stream added, broadcasting: 3\nI0527 00:39:22.539052 2037 log.go:172] (0xc00003a420) Reply frame received for 3\nI0527 00:39:22.539087 2037 log.go:172] (0xc00003a420) (0xc0004ab220) Create stream\nI0527 00:39:22.539097 2037 log.go:172] (0xc00003a420) (0xc0004ab220) Stream added, broadcasting: 5\nI0527 00:39:22.539987 2037 log.go:172] (0xc00003a420) Reply frame received for 5\nI0527 00:39:22.619630 2037 log.go:172] (0xc00003a420) Data frame received for 5\nI0527 00:39:22.619666 2037 log.go:172] (0xc0004ab220) (5) Data frame handling\nI0527 00:39:22.619676 2037 log.go:172] (0xc0004ab220) (5) Data frame sent\nI0527 00:39:22.619687 2037 log.go:172] (0xc00003a420) Data frame received for 5\nI0527 00:39:22.619709 2037 log.go:172] (0xc0004ab220) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31596\nConnection to 172.17.0.13 31596 port [tcp/31596] succeeded!\nI0527 00:39:22.619730 2037 log.go:172] (0xc00003a420) Data frame received for 3\nI0527 00:39:22.619746 2037 log.go:172] (0xc000398dc0) (3) Data frame handling\nI0527 00:39:22.621042 2037 log.go:172] (0xc00003a420) Data frame received for 1\nI0527 00:39:22.621074 2037 log.go:172] (0xc0004aac80) (1) Data frame handling\nI0527 00:39:22.621088 2037 log.go:172] (0xc0004aac80) (1) Data frame sent\nI0527 00:39:22.621104 2037 log.go:172] (0xc00003a420) (0xc0004aac80) Stream removed, broadcasting: 1\nI0527 00:39:22.621268 2037 log.go:172] (0xc00003a420) Go away received\nI0527 00:39:22.621647 2037 log.go:172] (0xc00003a420) (0xc0004aac80) Stream removed, broadcasting: 1\nI0527 00:39:22.621670 2037 log.go:172] (0xc00003a420) (0xc000398dc0) Stream removed, broadcasting: 3\nI0527 00:39:22.621679 2037 log.go:172] 
(0xc00003a420) (0xc0004ab220) Stream removed, broadcasting: 5\n" May 27 00:39:22.626: INFO: stdout: "" May 27 00:39:22.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-664 execpodtlsj7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31596' May 27 00:39:22.886: INFO: stderr: "I0527 00:39:22.810203 2056 log.go:172] (0xc000ae14a0) (0xc0009b86e0) Create stream\nI0527 00:39:22.810258 2056 log.go:172] (0xc000ae14a0) (0xc0009b86e0) Stream added, broadcasting: 1\nI0527 00:39:22.814939 2056 log.go:172] (0xc000ae14a0) Reply frame received for 1\nI0527 00:39:22.814987 2056 log.go:172] (0xc000ae14a0) (0xc000546320) Create stream\nI0527 00:39:22.815001 2056 log.go:172] (0xc000ae14a0) (0xc000546320) Stream added, broadcasting: 3\nI0527 00:39:22.815908 2056 log.go:172] (0xc000ae14a0) Reply frame received for 3\nI0527 00:39:22.815940 2056 log.go:172] (0xc000ae14a0) (0xc0005472c0) Create stream\nI0527 00:39:22.815951 2056 log.go:172] (0xc000ae14a0) (0xc0005472c0) Stream added, broadcasting: 5\nI0527 00:39:22.816848 2056 log.go:172] (0xc000ae14a0) Reply frame received for 5\nI0527 00:39:22.878733 2056 log.go:172] (0xc000ae14a0) Data frame received for 5\nI0527 00:39:22.878763 2056 log.go:172] (0xc0005472c0) (5) Data frame handling\nI0527 00:39:22.878785 2056 log.go:172] (0xc0005472c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31596\nConnection to 172.17.0.12 31596 port [tcp/31596] succeeded!\nI0527 00:39:22.879001 2056 log.go:172] (0xc000ae14a0) Data frame received for 5\nI0527 00:39:22.879029 2056 log.go:172] (0xc0005472c0) (5) Data frame handling\nI0527 00:39:22.879227 2056 log.go:172] (0xc000ae14a0) Data frame received for 3\nI0527 00:39:22.879241 2056 log.go:172] (0xc000546320) (3) Data frame handling\nI0527 00:39:22.881210 2056 log.go:172] (0xc000ae14a0) Data frame received for 1\nI0527 00:39:22.881230 2056 log.go:172] (0xc0009b86e0) (1) Data frame handling\nI0527 00:39:22.881241 2056 log.go:172] (0xc0009b86e0) (1) Data frame sent\nI0527 00:39:22.881250 2056 log.go:172] (0xc000ae14a0) (0xc0009b86e0) Stream removed, broadcasting: 1\nI0527 00:39:22.881497 2056 log.go:172] (0xc000ae14a0) (0xc0009b86e0) Stream removed, broadcasting: 1\nI0527 00:39:22.881520 2056 log.go:172] (0xc000ae14a0) Go away received\nI0527 00:39:22.881581 2056 log.go:172] (0xc000ae14a0) (0xc000546320) Stream removed, broadcasting: 3\nI0527 00:39:22.881601 2056 log.go:172] (0xc000ae14a0) (0xc0005472c0) Stream removed, broadcasting: 5\n" May 27 00:39:22.886: INFO: stdout: "" May 27 00:39:22.886: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:39:22.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-664" for this suite. 
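The Services spec above flips a Service from type=ExternalName to type=NodePort and then probes it with `nc -zv` from an exec pod, as the stderr streams show. A sketch of the merge patch involved; the selector and port are assumptions, and clearing externalName via null relies on merge-patch delete semantics:

# kubectl patch service externalname-service --type=merge -p "$(cat patch.yaml)"
spec:
  type: NodePort
  externalName: null           # must be cleared when leaving ExternalName
  selector:
    app: externalname-service  # assumed label on the backing pods
  ports:
  - port: 80
    protocol: TCP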
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:15.069 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":176,"skipped":3097,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:39:22.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-7c481349-4678-491d-b173-66ea59734c98 STEP: updating the pod May 27 00:39:31.656: INFO: Successfully updated pod "var-expansion-7c481349-4678-491d-b173-66ea59734c98" STEP: waiting for pod and container restart STEP: Failing liveness probe May 27 00:39:31.688: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-9949 PodName:var-expansion-7c481349-4678-491d-b173-66ea59734c98 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:39:31.689: INFO: >>> kubeConfig: /root/.kube/config I0527 00:39:31.719203 8 log.go:172] (0xc00276f600) (0xc0017497c0) Create stream I0527 00:39:31.719227 8 log.go:172] (0xc00276f600) (0xc0017497c0) Stream added, broadcasting: 1 I0527 00:39:31.720809 8 log.go:172] (0xc00276f600) Reply frame received for 1 I0527 00:39:31.720849 8 log.go:172] (0xc00276f600) (0xc001749860) Create stream I0527 00:39:31.720860 8 log.go:172] (0xc00276f600) (0xc001749860) Stream added, broadcasting: 3 I0527 00:39:31.721850 8 log.go:172] (0xc00276f600) Reply frame received for 3 I0527 00:39:31.721876 8 log.go:172] (0xc00276f600) (0xc0012a5f40) Create stream I0527 00:39:31.721885 8 log.go:172] (0xc00276f600) (0xc0012a5f40) Stream added, broadcasting: 5 I0527 00:39:31.722468 8 log.go:172] (0xc00276f600) Reply frame received for 5 I0527 00:39:31.785356 8 log.go:172] (0xc00276f600) Data frame received for 3 I0527 00:39:31.785384 8 log.go:172] (0xc001749860) (3) Data frame handling I0527 00:39:31.785444 8 log.go:172] (0xc00276f600) Data frame received for 5 I0527 00:39:31.785504 8 log.go:172] (0xc0012a5f40) (5) Data frame handling I0527 00:39:31.787250 8 log.go:172] (0xc00276f600) Data frame received for 1 I0527 00:39:31.787266 8 log.go:172] (0xc0017497c0) (1) Data frame handling I0527 00:39:31.787281 8 log.go:172] (0xc0017497c0) (1) Data frame sent I0527 00:39:31.787305 8 log.go:172] (0xc00276f600) (0xc0017497c0) Stream removed, 
broadcasting: 1 I0527 00:39:31.787404 8 log.go:172] (0xc00276f600) (0xc0017497c0) Stream removed, broadcasting: 1 I0527 00:39:31.787415 8 log.go:172] (0xc00276f600) (0xc001749860) Stream removed, broadcasting: 3 I0527 00:39:31.787510 8 log.go:172] (0xc00276f600) Go away received I0527 00:39:31.787556 8 log.go:172] (0xc00276f600) (0xc0012a5f40) Stream removed, broadcasting: 5 May 27 00:39:31.787: INFO: Pod exec output: / STEP: Waiting for container to restart May 27 00:39:31.791: INFO: Container dapi-container, restarts: 0 May 27 00:39:41.797: INFO: Container dapi-container, restarts: 0 May 27 00:39:51.797: INFO: Container dapi-container, restarts: 0 May 27 00:40:01.796: INFO: Container dapi-container, restarts: 0 May 27 00:40:11.796: INFO: Container dapi-container, restarts: 1 May 27 00:40:11.796: INFO: Container has restart count: 1 STEP: Rewriting the file May 27 00:40:11.796: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-9949 PodName:var-expansion-7c481349-4678-491d-b173-66ea59734c98 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:40:11.796: INFO: >>> kubeConfig: /root/.kube/config I0527 00:40:11.835948 8 log.go:172] (0xc00276fce0) (0xc0029706e0) Create stream I0527 00:40:11.835984 8 log.go:172] (0xc00276fce0) (0xc0029706e0) Stream added, broadcasting: 1 I0527 00:40:11.838975 8 log.go:172] (0xc00276fce0) Reply frame received for 1 I0527 00:40:11.839025 8 log.go:172] (0xc00276fce0) (0xc001f4aaa0) Create stream I0527 00:40:11.839036 8 log.go:172] (0xc00276fce0) (0xc001f4aaa0) Stream added, broadcasting: 3 I0527 00:40:11.840346 8 log.go:172] (0xc00276fce0) Reply frame received for 3 I0527 00:40:11.840406 8 log.go:172] (0xc00276fce0) (0xc00200ebe0) Create stream I0527 00:40:11.840433 8 log.go:172] (0xc00276fce0) (0xc00200ebe0) Stream added, broadcasting: 5 I0527 00:40:11.841834 8 log.go:172] (0xc00276fce0) Reply frame received for 5 I0527 00:40:11.931325 8 log.go:172] (0xc00276fce0) Data frame received for 3 I0527 00:40:11.931374 8 log.go:172] (0xc001f4aaa0) (3) Data frame handling I0527 00:40:11.931415 8 log.go:172] (0xc00276fce0) Data frame received for 5 I0527 00:40:11.931431 8 log.go:172] (0xc00200ebe0) (5) Data frame handling I0527 00:40:11.932817 8 log.go:172] (0xc00276fce0) Data frame received for 1 I0527 00:40:11.932839 8 log.go:172] (0xc0029706e0) (1) Data frame handling I0527 00:40:11.932854 8 log.go:172] (0xc0029706e0) (1) Data frame sent I0527 00:40:11.932869 8 log.go:172] (0xc00276fce0) (0xc0029706e0) Stream removed, broadcasting: 1 I0527 00:40:11.932987 8 log.go:172] (0xc00276fce0) Go away received I0527 00:40:11.933052 8 log.go:172] (0xc00276fce0) (0xc0029706e0) Stream removed, broadcasting: 1 I0527 00:40:11.933086 8 log.go:172] (0xc00276fce0) (0xc001f4aaa0) Stream removed, broadcasting: 3 I0527 00:40:11.933101 8 log.go:172] (0xc00276fce0) (0xc00200ebe0) Stream removed, broadcasting: 5 May 27 00:40:11.933: INFO: Exec stderr: "" May 27 00:40:11.933: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 27 00:40:41.944: INFO: Container has restart count: 2 May 27 00:41:43.942: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 27 00:41:43.946: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-9949 PodName:var-expansion-7c481349-4678-491d-b173-66ea59734c98 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} May 27 00:41:43.946: INFO: >>> kubeConfig: /root/.kube/config I0527 00:41:43.973826 8 log.go:172] (0xc00276efd0) (0xc001f4bb80) Create stream I0527 00:41:43.973867 8 log.go:172] (0xc00276efd0) (0xc001f4bb80) Stream added, broadcasting: 1 I0527 00:41:43.975682 8 log.go:172] (0xc00276efd0) Reply frame received for 1 I0527 00:41:43.975731 8 log.go:172] (0xc00276efd0) (0xc001f4bc20) Create stream I0527 00:41:43.975744 8 log.go:172] (0xc00276efd0) (0xc001f4bc20) Stream added, broadcasting: 3 I0527 00:41:43.976793 8 log.go:172] (0xc00276efd0) Reply frame received for 3 I0527 00:41:43.976842 8 log.go:172] (0xc00276efd0) (0xc001fbe3c0) Create stream I0527 00:41:43.976864 8 log.go:172] (0xc00276efd0) (0xc001fbe3c0) Stream added, broadcasting: 5 I0527 00:41:43.978279 8 log.go:172] (0xc00276efd0) Reply frame received for 5 I0527 00:41:44.027916 8 log.go:172] (0xc00276efd0) Data frame received for 3 I0527 00:41:44.027949 8 log.go:172] (0xc001f4bc20) (3) Data frame handling I0527 00:41:44.027991 8 log.go:172] (0xc00276efd0) Data frame received for 5 I0527 00:41:44.028016 8 log.go:172] (0xc001fbe3c0) (5) Data frame handling I0527 00:41:44.028923 8 log.go:172] (0xc00276efd0) Data frame received for 1 I0527 00:41:44.028947 8 log.go:172] (0xc001f4bb80) (1) Data frame handling I0527 00:41:44.028968 8 log.go:172] (0xc001f4bb80) (1) Data frame sent I0527 00:41:44.028988 8 log.go:172] (0xc00276efd0) (0xc001f4bb80) Stream removed, broadcasting: 1 I0527 00:41:44.029081 8 log.go:172] (0xc00276efd0) (0xc001f4bb80) Stream removed, broadcasting: 1 I0527 00:41:44.029095 8 log.go:172] (0xc00276efd0) (0xc001f4bc20) Stream removed, broadcasting: 3 I0527 00:41:44.029450 8 log.go:172] (0xc00276efd0) (0xc001fbe3c0) Stream removed, broadcasting: 5 I0527 00:41:44.029576 8 log.go:172] (0xc00276efd0) Go away received May 27 00:41:44.033: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-9949 PodName:var-expansion-7c481349-4678-491d-b173-66ea59734c98 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:41:44.033: INFO: >>> kubeConfig: /root/.kube/config I0527 00:41:44.062644 8 log.go:172] (0xc0025740b0) (0xc0029703c0) Create stream I0527 00:41:44.062668 8 log.go:172] (0xc0025740b0) (0xc0029703c0) Stream added, broadcasting: 1 I0527 00:41:44.064519 8 log.go:172] (0xc0025740b0) Reply frame received for 1 I0527 00:41:44.064571 8 log.go:172] (0xc0025740b0) (0xc001f4be00) Create stream I0527 00:41:44.064597 8 log.go:172] (0xc0025740b0) (0xc001f4be00) Stream added, broadcasting: 3 I0527 00:41:44.065750 8 log.go:172] (0xc0025740b0) Reply frame received for 3 I0527 00:41:44.065784 8 log.go:172] (0xc0025740b0) (0xc0029e8000) Create stream I0527 00:41:44.065795 8 log.go:172] (0xc0025740b0) (0xc0029e8000) Stream added, broadcasting: 5 I0527 00:41:44.066493 8 log.go:172] (0xc0025740b0) Reply frame received for 5 I0527 00:41:44.123641 8 log.go:172] (0xc0025740b0) Data frame received for 3 I0527 00:41:44.123686 8 log.go:172] (0xc001f4be00) (3) Data frame handling I0527 00:41:44.123709 8 log.go:172] (0xc0025740b0) Data frame received for 5 I0527 00:41:44.123717 8 log.go:172] (0xc0029e8000) (5) Data frame handling I0527 00:41:44.124518 8 log.go:172] (0xc0025740b0) Data frame received for 1 I0527 00:41:44.124537 8 log.go:172] (0xc0029703c0) (1) Data frame handling I0527 00:41:44.124553 8 log.go:172] (0xc0029703c0) (1) Data frame sent I0527 00:41:44.124566 8 log.go:172] (0xc0025740b0) (0xc0029703c0) Stream removed, broadcasting: 1 I0527 00:41:44.124584 8 log.go:172] (0xc0025740b0) Go away received I0527 00:41:44.124713 8 log.go:172] (0xc0025740b0) (0xc0029703c0) Stream removed, broadcasting: 1 I0527 00:41:44.124754 8 log.go:172] (0xc0025740b0) (0xc001f4be00) Stream removed, broadcasting: 3 I0527 00:41:44.124774 8 log.go:172] (0xc0025740b0) (0xc0029e8000) Stream removed, broadcasting: 5 May 27 00:41:44.124: INFO: Deleting pod "var-expansion-7c481349-4678-491d-b173-66ea59734c98" in namespace "var-expansion-9949" May 27 00:41:44.130: INFO: Wait up to 5m0s for pod "var-expansion-7c481349-4678-491d-b173-66ea59734c98" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:42:26.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9949" for this suite. 
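The Variable Expansion spec above checks that a subPathExpr mount keeps pointing at the originally resolved directory across container restarts, even after the pod is updated. A minimal sketch of the mount shape, with illustrative names:

# /subpath_mount resolves through $(POD_SUBPATH) once at container start;
# the spec verifies the resolved path does not move on restart.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "tail -f /dev/null"]
    env:
    - name: POD_SUBPATH
      value: foo
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
    - name: workdir
      mountPath: /subpath_mount
      subPathExpr: $(POD_SUBPATH)
  volumes:
  - name: workdir
    emptyDir: {}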
• [SLOW TEST:183.224 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":177,"skipped":3112,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:42:26.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 27 00:42:30.980: INFO: Successfully updated pod "labelsupdate7735fb9f-f09d-4d1c-877e-ea4d93fab595" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:42:35.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4596" for this suite. 
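The Downward API spec above updates a pod's labels and expects the kubelet to rewrite the mounted labels file in place, with no restart. A sketch with placeholder names:

# After `kubectl label pod labels-demo key2=value2 --overwrite`,
# the contents of /etc/podinfo/labels are refreshed by the kubelet.
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels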
• [SLOW TEST:8.883 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":178,"skipped":3125,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:42:35.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 27 00:42:35.135: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:42:43.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7243" for this suite. 
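The InitContainer spec above verifies that init containers run to completion, one at a time and in order, before the RestartAlways app container starts. A minimal illustrative shape:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:              # run sequentially; each must exit 0
  - name: init-1
    image: busybox:1.29
    command: ["sh", "-c", "echo init-1 ran"]
  - name: init-2
    image: busybox:1.29
    command: ["sh", "-c", "echo init-2 ran"]
  containers:                  # starts only after both inits succeed
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "tail -f /dev/null"]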
• [SLOW TEST:8.956 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":179,"skipped":3130,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:42:44.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:42:44.217: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9ee624ea-6979-4adf-8dab-7626f36bf6f7", Controller:(*bool)(0xc0048007f2), BlockOwnerDeletion:(*bool)(0xc0048007f3)}} May 27 00:42:44.273: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e0dea594-7f1f-4491-ab93-b3f5d99c0842", Controller:(*bool)(0xc002696b1a), BlockOwnerDeletion:(*bool)(0xc002696b1b)}} May 27 00:42:44.306: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a5418ac0-8297-4b38-b426-6faeffffb034", Controller:(*bool)(0xc0047688fa), BlockOwnerDeletion:(*bool)(0xc0047688fb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:42:49.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7540" for this suite. 
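The garbage collector spec above wires three pods into an ownership cycle (pod3 owns pod1, pod1 owns pod2, pod2 owns pod3, per the OwnerReferences dumps) and expects the GC not to deadlock. A sketch of one link in that cycle; the uid must be the live owner's UID, so it stays a placeholder here:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: <uid-of-pod3>         # placeholder: copy from the created pod3
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "tail -f /dev/null"]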
• [SLOW TEST:5.415 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":180,"skipped":3134,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:42:49.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-f7a2128c-eca8-4568-b9f9-639d7f8fd16d STEP: Creating a pod to test consume secrets May 27 00:42:49.635: INFO: Waiting up to 5m0s for pod "pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403" in namespace "secrets-1594" to be "Succeeded or Failed" May 27 00:42:49.681: INFO: Pod "pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403": Phase="Pending", Reason="", readiness=false. Elapsed: 45.425663ms May 27 00:42:51.685: INFO: Pod "pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049757298s May 27 00:42:53.690: INFO: Pod "pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403": Phase="Running", Reason="", readiness=true. Elapsed: 4.054404025s May 27 00:42:55.694: INFO: Pod "pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058852775s STEP: Saw pod success May 27 00:42:55.694: INFO: Pod "pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403" satisfied condition "Succeeded or Failed" May 27 00:42:55.697: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403 container secret-volume-test: STEP: delete the pod May 27 00:42:55.727: INFO: Waiting for pod pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403 to disappear May 27 00:42:55.738: INFO: Pod pod-secrets-8c1f95b5-0464-468b-abd0-73aaf76be403 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:42:55.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1594" for this suite. 
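The Secrets spec above consumes a secret volume as a non-root user with both defaultMode and fsGroup set, so the file modes and group ownership are checked together. An illustrative sketch; the UID, GID, mode, and secret name are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1001              # secret files become group-readable by this GID
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0440        # applied to each projected key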
• [SLOW TEST:6.308 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":181,"skipped":3151,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:42:55.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 27 00:43:05.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 27 00:43:05.923: INFO: Pod pod-with-poststart-http-hook still exists May 27 00:43:07.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 27 00:43:07.928: INFO: Pod pod-with-poststart-http-hook still exists May 27 00:43:09.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 27 00:43:09.928: INFO: Pod pod-with-poststart-http-hook still exists May 27 00:43:11.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 27 00:43:11.928: INFO: Pod pod-with-poststart-http-hook still exists May 27 00:43:13.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 27 00:43:13.928: INFO: Pod pod-with-poststart-http-hook still exists May 27 00:43:15.923: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 27 00:43:15.927: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:43:15.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1640" for this suite. 
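The poststart test above attaches an HTTPGet lifecycle hook to the container, aimed at the handler pod created in [BeforeEach]. A sketch of the relevant stanza, assuming an illustrative handler IP and port (the suite resolves the real pod IP at runtime); note that corev1.Handler is the v0.18-era type name, renamed LifecycleHandler in later k8s.io/api releases:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2",
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.244.1.10", // illustrative: the handler pod's IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("lifecycle-demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The kubelet issues the GET right after the container starts; the
	// "check poststart hook" step above asserts the handler received it.
}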
• [SLOW TEST:20.191 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":3164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:43:15.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 27 00:43:16.515: INFO: created pod pod-service-account-defaultsa May 27 00:43:16.515: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 27 00:43:16.527: INFO: created pod pod-service-account-mountsa May 27 00:43:16.527: INFO: pod pod-service-account-mountsa service account token volume mount: true May 27 00:43:16.584: INFO: created pod pod-service-account-nomountsa May 27 00:43:16.584: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 27 00:43:16.607: INFO: created pod pod-service-account-defaultsa-mountspec May 27 00:43:16.607: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 27 00:43:16.630: INFO: created pod pod-service-account-mountsa-mountspec May 27 00:43:16.630: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 27 00:43:16.747: INFO: created pod pod-service-account-nomountsa-mountspec May 27 00:43:16.747: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 27 00:43:16.762: INFO: created pod pod-service-account-defaultsa-nomountspec May 27 00:43:16.762: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 27 00:43:16.811: INFO: created pod pod-service-account-mountsa-nomountspec May 27 00:43:16.811: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 27 00:43:16.881: INFO: created pod pod-service-account-nomountsa-nomountspec May 27 00:43:16.881: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:43:16.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1336" for this suite. 
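Each "service account token volume mount: true/false" line above is decided by two knobs: automountServiceAccountToken on the ServiceAccount and the same field on the pod spec, with the pod-level setting taking precedence. A minimal sketch of the opt-out case (names illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	no := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-demo"},
		Spec: corev1.PodSpec{
			ServiceAccountName: "default",
			// Overrides whatever the ServiceAccount itself declares, so no
			// token volume is mounted even when the SA would allow it.
			AutomountServiceAccountToken: &no,
			Containers: []corev1.Container{{Name: "token-test", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	if _, err := cs.CoreV1().Pods("svcaccounts-demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}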
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":183,"skipped":3214,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:43:17.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-ccb7da1f-5175-44f7-adc6-bd72d125cd77 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:43:17.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4854" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":184,"skipped":3225,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:43:17.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 27 00:43:39.584: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 27 00:43:39.610: INFO: Pod pod-with-prestop-http-hook still exists May 27 00:43:41.611: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 27 00:43:41.616: INFO: Pod pod-with-prestop-http-hook still exists May 27 00:43:43.611: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 27 00:43:43.615: INFO: Pod pod-with-prestop-http-hook still exists May 27 00:43:45.611: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 27 00:43:45.615: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:43:45.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7025" for this suite. • [SLOW TEST:28.439 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":185,"skipped":3225,"failed":0} SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:43:45.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-fc52bec7-e565-4436-a109-37edf75b995d STEP: Creating secret with name s-test-opt-upd-c7f940f3-d0f0-43ab-bcdc-91aa48c42b01 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-fc52bec7-e565-4436-a109-37edf75b995d STEP: Updating secret s-test-opt-upd-c7f940f3-d0f0-43ab-bcdc-91aa48c42b01 STEP: Creating secret with name s-test-opt-create-cb137416-c772-407a-8ca6-754d3dd49c8d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:43:53.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5081" for this suite. 
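The optional-updates test above leans on secret volumes marked Optional: the pod starts even though the s-test-opt-create-* secret does not exist yet, and the kubelet later materializes and refreshes the mounted keys as secrets are created, updated, or deleted. A sketch of such a volume (names illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "creates",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: "s-test-opt-create-demo", // may not exist yet
					Optional:   &optional,                // pod still starts; files appear later
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "creates-volume-test",
				Image:        "busybox:1.31",
				Command:      []string{"sh", "-c", "while true; do ls /etc/secrets; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "creates", MountPath: "/etc/secrets"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("secrets-demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The "waiting to observe update in volume" step above is then just a matter of polling the container's view of /etc/secrets until the kubelet's periodic sync has propagated the change.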
• [SLOW TEST:8.260 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":186,"skipped":3227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:43:53.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:43:54.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3844" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":187,"skipped":3265,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:43:54.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-4c7d8318-172e-410a-b753-9a842535e658 STEP: Creating a pod to test consume configMaps May 27 00:43:54.291: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af7356ff-b478-4989-a9b8-0c46a46665af" in namespace "projected-540" to be "Succeeded or Failed" May 27 00:43:54.324: INFO: Pod "pod-projected-configmaps-af7356ff-b478-4989-a9b8-0c46a46665af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.68223ms May 27 00:43:56.333: INFO: Pod "pod-projected-configmaps-af7356ff-b478-4989-a9b8-0c46a46665af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041522984s May 27 00:43:58.338: INFO: Pod "pod-projected-configmaps-af7356ff-b478-4989-a9b8-0c46a46665af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046951089s STEP: Saw pod success May 27 00:43:58.338: INFO: Pod "pod-projected-configmaps-af7356ff-b478-4989-a9b8-0c46a46665af" satisfied condition "Succeeded or Failed" May 27 00:43:58.342: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-af7356ff-b478-4989-a9b8-0c46a46665af container projected-configmap-volume-test: STEP: delete the pod May 27 00:43:58.362: INFO: Waiting for pod pod-projected-configmaps-af7356ff-b478-4989-a9b8-0c46a46665af to disappear May 27 00:43:58.390: INFO: Pod pod-projected-configmaps-af7356ff-b478-4989-a9b8-0c46a46665af no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:43:58.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-540" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":3272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:43:58.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 27 00:43:58.487: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-a 4f3c9f12-4749-41c5-b39b-e8ecaf9f8623 7954643 0 2020-05-27 00:43:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-27 00:43:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:43:58.487: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-a 4f3c9f12-4749-41c5-b39b-e8ecaf9f8623 7954643 0 2020-05-27 00:43:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-27 00:43:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the 
correct watchers observe the notification May 27 00:44:08.496: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-a 4f3c9f12-4749-41c5-b39b-e8ecaf9f8623 7954708 0 2020-05-27 00:43:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:44:08.496: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-a 4f3c9f12-4749-41c5-b39b-e8ecaf9f8623 7954708 0 2020-05-27 00:43:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 27 00:44:18.506: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-a 4f3c9f12-4749-41c5-b39b-e8ecaf9f8623 7954736 0 2020-05-27 00:43:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:44:18.507: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-a 4f3c9f12-4749-41c5-b39b-e8ecaf9f8623 7954736 0 2020-05-27 00:43:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 27 00:44:28.515: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-a 4f3c9f12-4749-41c5-b39b-e8ecaf9f8623 7954768 0 2020-05-27 00:43:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:44:28.515: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-a 4f3c9f12-4749-41c5-b39b-e8ecaf9f8623 7954768 0 2020-05-27 00:43:58 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a 
configmap with label B and ensuring the correct watchers observe the notification May 27 00:44:38.522: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-b 823f5edd-0b06-46ef-8944-93da225528de 7954798 0 2020-05-27 00:44:38 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:44:38.522: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-b 823f5edd-0b06-46ef-8944-93da225528de 7954798 0 2020-05-27 00:44:38 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 27 00:44:48.533: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-b 823f5edd-0b06-46ef-8944-93da225528de 7954828 0 2020-05-27 00:44:38 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 27 00:44:48.533: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3461 /api/v1/namespaces/watch-3461/configmaps/e2e-watch-test-configmap-b 823f5edd-0b06-46ef-8944-93da225528de 7954828 0 2020-05-27 00:44:38 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-27 00:44:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:44:58.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3461" for this suite. 
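The paired "Got : ADDED/MODIFIED/DELETED" lines above come from two watchers (one for label A, one for label A-or-B) observing the same ConfigMap, which is why every event is logged twice. Opening such a label-filtered watch with client-go looks roughly like this (selector and namespace illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().ConfigMaps("watch-demo").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event mirrors one "Got : <TYPE> &ConfigMap{...}" line in the log.
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a watch error or bookmark object
		}
		fmt.Printf("Got : %s %s (resourceVersion %s)\n", ev.Type, cm.Name, cm.ResourceVersion)
	}
}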
• [SLOW TEST:60.147 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":189,"skipped":3306,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:44:58.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 27 00:44:59.369: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 27 00:45:01.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137099, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137099, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137099, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137099, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:45:03.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137099, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137099, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137099, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137099, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:45:06.430: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:45:06.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:45:07.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9438" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.147 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":190,"skipped":3315,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:45:07.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:45:07.850: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 27 00:45:10.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5810 create -f -' May 27 00:45:14.142: INFO: stderr: "" May 27 00:45:14.142: INFO: stdout: "e2e-test-crd-publish-openapi-4645-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 27 00:45:14.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5810 delete e2e-test-crd-publish-openapi-4645-crds test-cr' May 27 00:45:14.266: INFO: stderr: "" May 27 00:45:14.266: INFO: stdout: 
"e2e-test-crd-publish-openapi-4645-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 27 00:45:14.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5810 apply -f -' May 27 00:45:14.552: INFO: stderr: "" May 27 00:45:14.552: INFO: stdout: "e2e-test-crd-publish-openapi-4645-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 27 00:45:14.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5810 delete e2e-test-crd-publish-openapi-4645-crds test-cr' May 27 00:45:14.668: INFO: stderr: "" May 27 00:45:14.668: INFO: stdout: "e2e-test-crd-publish-openapi-4645-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 27 00:45:14.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4645-crds' May 27 00:45:14.953: INFO: stderr: "" May 27 00:45:14.953: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4645-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:45:16.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5810" for this suite. • [SLOW TEST:9.206 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":191,"skipped":3321,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:45:16.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get 
the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:45:53.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2119" for this suite. • [SLOW TEST:36.684 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":192,"skipped":3337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:45:53.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:45:53.701: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 27 00:45:58.717: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 27 00:45:58.717: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 27 00:46:00.722: INFO: Creating deployment "test-rollover-deployment" May 27 00:46:00.735: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 27 00:46:02.742: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 27 00:46:02.748: INFO: Ensure that both replica sets have 1 created replica May 27 00:46:02.755: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 27 00:46:02.762: INFO: Updating deployment test-rollover-deployment May 27 00:46:02.762: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 27 
00:46:04.825: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 27 00:46:04.834: INFO: Make sure deployment "test-rollover-deployment" is complete May 27 00:46:04.839: INFO: all replica sets need to contain the pod-template-hash label May 27 00:46:04.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137163, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:46:06.848: INFO: all replica sets need to contain the pod-template-hash label May 27 00:46:06.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137165, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:46:08.848: INFO: all replica sets need to contain the pod-template-hash label May 27 00:46:08.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137165, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:46:10.846: INFO: all replica sets need to contain the pod-template-hash label May 27 00:46:10.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137165, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:46:12.859: INFO: all replica sets need to contain the pod-template-hash label May 27 00:46:12.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137165, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:46:14.848: INFO: all replica sets need to contain the pod-template-hash label May 27 00:46:14.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137165, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137160, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 00:46:16.848: INFO: May 27 00:46:16.848: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 27 00:46:16.856: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-676 /apis/apps/v1/namespaces/deployment-676/deployments/test-rollover-deployment 6571d3ba-2917-4584-b791-173a9e28abd1 7955318 2 2020-05-27 00:46:00 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-27 00:46:02 
+0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-27 00:46:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00389ea08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-27 00:46:00 +0000 UTC,LastTransitionTime:2020-05-27 00:46:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-27 00:46:16 +0000 UTC,LastTransitionTime:2020-05-27 00:46:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 27 00:46:16.859: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-676 /apis/apps/v1/namespaces/deployment-676/replicasets/test-rollover-deployment-7c4fd9c879 b1eca3c8-bda4-4a9f-88be-e4646b171c3e 7955306 2 2020-05-27 00:46:02 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 6571d3ba-2917-4584-b791-173a9e28abd1 0xc00390ba17 0xc00390ba18}] [] [{kube-controller-manager Update apps/v1 2020-05-27 00:46:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6571d3ba-2917-4584-b791-173a9e28abd1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00390bac8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 27 00:46:16.859: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 27 00:46:16.859: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-676 /apis/apps/v1/namespaces/deployment-676/replicasets/test-rollover-controller 7c83a0ed-d985-440e-a286-8ec7d668dbb5 7955317 2 2020-05-27 00:45:53 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 6571d3ba-2917-4584-b791-173a9e28abd1 0xc00390b7c7 0xc00390b7c8}] [] [{e2e.test Update apps/v1 2020-05-27 00:45:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-27 00:46:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6571d3ba-2917-4584-b791-173a9e28abd1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00390b888 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 27 00:46:16.859: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-676 /apis/apps/v1/namespaces/deployment-676/replicasets/test-rollover-deployment-5686c4cfd5 2ab47e0a-9378-4c10-9b6c-8a77910cb3d8 7955260 2 2020-05-27 00:46:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 6571d3ba-2917-4584-b791-173a9e28abd1 0xc00390b907 0xc00390b908}] [] [{kube-controller-manager Update apps/v1 2020-05-27 00:46:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6571d3ba-2917-4584-b791-173a9e28abd1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00390b998 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 27 00:46:16.863: INFO: Pod "test-rollover-deployment-7c4fd9c879-tjp9s" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-tjp9s test-rollover-deployment-7c4fd9c879- deployment-676 /api/v1/namespaces/deployment-676/pods/test-rollover-deployment-7c4fd9c879-tjp9s 4c246306-1b15-4dcc-9cf8-9e29bbb284d1 7955278 0 2020-05-27 00:46:02 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 b1eca3c8-bda4-4a9f-88be-e4646b171c3e 0xc00389ef77 0xc00389ef78}] [] [{kube-controller-manager Update v1 2020-05-27 00:46:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1eca3c8-bda4-4a9f-88be-e4646b171c3e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-27 00:46:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.176\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-js2nz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-js2nz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-js2nz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 00:46:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-27 00:46:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 00:46:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 00:46:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.176,StartTime:2020-05-27 00:46:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-27 00:46:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://e60ce82356d6a2d3168f287b03511758faca8cdfeb1fe4f14af12467cca590ea,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:46:16.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-676" for this suite. • [SLOW TEST:23.287 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":193,"skipped":3367,"failed":0} S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:46:16.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:46:17.146: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:46:21.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8792" for this suite. 
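The pods test above fetches container logs through the API server over a websocket rather than a plain HTTP GET. For context, a minimal standalone sketch of that retrieval path, using golang.org/x/net/websocket (the package the e2e framework's websocket helper is built on); the server URL mirrors the --server flag visible elsewhere in this run, while the namespace, pod name, and bearer token are hypothetical placeholders, not values from this log:

```go
// Minimal sketch (not the e2e framework's helper): read a pod's logs over a
// websocket connection to the API server's /log subresource.
package main

import (
	"crypto/tls"
	"fmt"
	"io"

	"golang.org/x/net/websocket"
)

func main() {
	// Hypothetical target; substitute a real namespace/pod and credentials.
	url := "wss://172.30.12.66:32773/api/v1/namespaces/pods-8792/pods/pod-logs-websocket/log?follow=false"
	cfg, err := websocket.NewConfig(url, "http://localhost")
	if err != nil {
		panic(err)
	}
	cfg.Header.Set("Authorization", "Bearer REPLACE_WITH_TOKEN") // placeholder credential
	cfg.TlsConfig = &tls.Config{InsecureSkipVerify: true}        // acceptable only against a throwaway test cluster
	ws, err := websocket.DialConfig(cfg)
	if err != nil {
		panic(err)
	}
	defer ws.Close()
	logs, err := io.ReadAll(ws) // the log content streams back as websocket frames until the server closes
	if err != nil {
		panic(err)
	}
	fmt.Printf("retrieved logs: %q\n", logs)
}
```

Because the server closes the stream once the log is exhausted (follow=false), a plain io.ReadAll is enough to drain it.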
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":194,"skipped":3368,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:46:21.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:46:21.307: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 27 00:46:24.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5512 create -f -' May 27 00:46:27.397: INFO: stderr: "" May 27 00:46:27.397: INFO: stdout: "e2e-test-crd-publish-openapi-3575-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 27 00:46:27.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5512 delete e2e-test-crd-publish-openapi-3575-crds test-cr' May 27 00:46:27.523: INFO: stderr: "" May 27 00:46:27.523: INFO: stdout: "e2e-test-crd-publish-openapi-3575-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 27 00:46:27.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5512 apply -f -' May 27 00:46:27.813: INFO: stderr: "" May 27 00:46:27.813: INFO: stdout: "e2e-test-crd-publish-openapi-3575-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 27 00:46:27.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5512 delete e2e-test-crd-publish-openapi-3575-crds test-cr' May 27 00:46:27.913: INFO: stderr: "" May 27 00:46:27.913: INFO: stdout: "e2e-test-crd-publish-openapi-3575-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 27 00:46:27.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3575-crds' May 27 00:46:28.196: INFO: stderr: "" May 27 00:46:28.196: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3575-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:46:30.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5512" for this suite. 
• [SLOW TEST:8.924 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":195,"skipped":3371,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:46:30.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-q8m7n in namespace proxy-3299 I0527 00:46:30.312744 8 runners.go:190] Created replication controller with name: proxy-service-q8m7n, namespace: proxy-3299, replica count: 1 I0527 00:46:31.363187 8 runners.go:190] proxy-service-q8m7n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:46:32.363404 8 runners.go:190] proxy-service-q8m7n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:46:33.363639 8 runners.go:190] proxy-service-q8m7n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0527 00:46:34.363911 8 runners.go:190] proxy-service-q8m7n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0527 00:46:35.364188 8 runners.go:190] proxy-service-q8m7n Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 00:46:35.367: INFO: setup took 5.18551582s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 27 00:46:35.376: INFO: (0) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 8.05971ms) May 27 00:46:35.376: INFO: (0) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 7.967081ms) May 27 00:46:35.376: INFO: (0) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 8.090086ms) May 27 00:46:35.376: INFO: (0) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 8.26559ms) May 27 00:46:35.376: INFO: (0) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 8.556152ms) May 27 00:46:35.377: INFO: (0) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 9.083019ms) May 27 00:46:35.377: INFO: (0) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... 
(200; 9.365433ms) May 27 00:46:35.378: INFO: (0) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 10.711711ms) May 27 00:46:35.379: INFO: (0) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 11.490414ms) May 27 00:46:35.379: INFO: (0) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 11.701776ms) May 27 00:46:35.385: INFO: (0) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 17.771258ms) May 27 00:46:35.385: INFO: (0) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 17.848169ms) May 27 00:46:35.386: INFO: (0) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 18.086952ms) May 27 00:46:35.386: INFO: (0) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 18.139656ms) May 27 00:46:35.390: INFO: (0) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 22.028952ms) May 27 00:46:35.390: INFO: (0) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test<... (200; 28.580875ms) May 27 00:46:35.419: INFO: (1) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 28.556845ms) May 27 00:46:35.419: INFO: (1) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 28.619494ms) May 27 00:46:35.419: INFO: (1) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 28.785171ms) May 27 00:46:35.420: INFO: (1) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 28.981741ms) May 27 00:46:35.420: INFO: (1) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 28.983436ms) May 27 00:46:35.421: INFO: (1) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 30.745268ms) May 27 00:46:35.422: INFO: (1) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 30.870317ms) May 27 00:46:35.422: INFO: (1) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 30.91735ms) May 27 00:46:35.422: INFO: (1) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 30.984738ms) May 27 00:46:35.422: INFO: (1) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 30.974534ms) May 27 00:46:35.422: INFO: (1) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: ... (200; 3.707266ms) May 27 00:46:35.426: INFO: (2) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 4.199793ms) May 27 00:46:35.429: INFO: (2) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 7.512869ms) May 27 00:46:35.429: INFO: (2) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 7.461821ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... 
(200; 7.548421ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 7.531549ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 7.608758ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 7.765613ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 7.811629ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 8.123175ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 8.094188ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 8.264009ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 8.257418ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 8.315385ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 8.304384ms) May 27 00:46:35.430: INFO: (2) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test (200; 5.141241ms) May 27 00:46:35.436: INFO: (3) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 5.148378ms) May 27 00:46:35.436: INFO: (3) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 5.47472ms) May 27 00:46:35.436: INFO: (3) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 5.516137ms) May 27 00:46:35.436: INFO: (3) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 5.579754ms) May 27 00:46:35.436: INFO: (3) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 5.544978ms) May 27 00:46:35.436: INFO: (3) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: ... (200; 5.618505ms) May 27 00:46:35.437: INFO: (3) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 6.401968ms) May 27 00:46:35.437: INFO: (3) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 6.628485ms) May 27 00:46:35.437: INFO: (3) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 6.817019ms) May 27 00:46:35.437: INFO: (3) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 6.849027ms) May 27 00:46:35.437: INFO: (3) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 6.906473ms) May 27 00:46:35.441: INFO: (4) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 3.575213ms) May 27 00:46:35.441: INFO: (4) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 3.902036ms) May 27 00:46:35.442: INFO: (4) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 3.887965ms) May 27 00:46:35.441: INFO: (4) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... 
(200; 3.909126ms) May 27 00:46:35.442: INFO: (4) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 3.95755ms) May 27 00:46:35.442: INFO: (4) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 3.928858ms) May 27 00:46:35.442: INFO: (4) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.764515ms) May 27 00:46:35.442: INFO: (4) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.710682ms) May 27 00:46:35.442: INFO: (4) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 4.72989ms) May 27 00:46:35.442: INFO: (4) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 4.804451ms) May 27 00:46:35.443: INFO: (4) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 5.160638ms) May 27 00:46:35.443: INFO: (4) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 5.306524ms) May 27 00:46:35.443: INFO: (4) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 5.274246ms) May 27 00:46:35.443: INFO: (4) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 5.345251ms) May 27 00:46:35.443: INFO: (4) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 5.259902ms) May 27 00:46:35.443: INFO: (4) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test (200; 3.507065ms) May 27 00:46:35.447: INFO: (5) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 3.567054ms) May 27 00:46:35.447: INFO: (5) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 3.901402ms) May 27 00:46:35.447: INFO: (5) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 4.020562ms) May 27 00:46:35.447: INFO: (5) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test<... (200; 5.455848ms) May 27 00:46:35.448: INFO: (5) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 5.427362ms) May 27 00:46:35.448: INFO: (5) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 5.445049ms) May 27 00:46:35.452: INFO: (6) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 3.332904ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.767062ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.754753ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 5.015639ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 4.793917ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 5.050663ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 4.78876ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... 
(200; 4.79899ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 4.972011ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 4.996664ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 5.091038ms) May 27 00:46:35.454: INFO: (6) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: ... (200; 6.994494ms) May 27 00:46:35.462: INFO: (7) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 7.096659ms) May 27 00:46:35.462: INFO: (7) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 7.284256ms) May 27 00:46:35.462: INFO: (7) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test (200; 7.560825ms) May 27 00:46:35.462: INFO: (7) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 7.516035ms) May 27 00:46:35.462: INFO: (7) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 7.559739ms) May 27 00:46:35.465: INFO: (8) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 2.749125ms) May 27 00:46:35.465: INFO: (8) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 3.010513ms) May 27 00:46:35.467: INFO: (8) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 4.877511ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test (200; 5.512129ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 5.508148ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 5.555769ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 5.591501ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 5.621382ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 5.606873ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 5.744188ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 5.815303ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 5.879238ms) May 27 00:46:35.468: INFO: (8) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 6.012451ms) May 27 00:46:35.471: INFO: (9) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 2.53291ms) May 27 00:46:35.474: INFO: (9) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 5.002925ms) May 27 00:46:35.474: INFO: (9) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... 
(200; 4.683518ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 5.71841ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 6.164577ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 6.215389ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 6.156404ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 6.145762ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 6.200428ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 6.286961ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 6.322375ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 6.177535ms) May 27 00:46:35.475: INFO: (9) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: ... (200; 3.821578ms) May 27 00:46:35.479: INFO: (10) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test (200; 3.985798ms) May 27 00:46:35.479: INFO: (10) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 3.980476ms) May 27 00:46:35.479: INFO: (10) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.048148ms) May 27 00:46:35.480: INFO: (10) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 4.370003ms) May 27 00:46:35.480: INFO: (10) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.455281ms) May 27 00:46:35.480: INFO: (10) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 4.683769ms) May 27 00:46:35.480: INFO: (10) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 4.725645ms) May 27 00:46:35.480: INFO: (10) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 5.082229ms) May 27 00:46:35.480: INFO: (10) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 5.150991ms) May 27 00:46:35.481: INFO: (10) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 5.145409ms) May 27 00:46:35.481: INFO: (10) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 5.29648ms) May 27 00:46:35.481: INFO: (10) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 5.327545ms) May 27 00:46:35.481: INFO: (10) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 5.377185ms) May 27 00:46:35.485: INFO: (11) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 4.019808ms) May 27 00:46:35.485: INFO: (11) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 3.99166ms) May 27 00:46:35.485: INFO: (11) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... 
(200; 4.168946ms) May 27 00:46:35.485: INFO: (11) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 4.217295ms) May 27 00:46:35.485: INFO: (11) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.25288ms) May 27 00:46:35.485: INFO: (11) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.292233ms) May 27 00:46:35.487: INFO: (11) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 5.890332ms) May 27 00:46:35.487: INFO: (11) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 6.180234ms) May 27 00:46:35.487: INFO: (11) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 6.250793ms) May 27 00:46:35.487: INFO: (11) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 6.302418ms) May 27 00:46:35.487: INFO: (11) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: ... (200; 6.23843ms) May 27 00:46:35.494: INFO: (12) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 6.316757ms) May 27 00:46:35.494: INFO: (12) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test<... (200; 6.370429ms) May 27 00:46:35.495: INFO: (12) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 6.788951ms) May 27 00:46:35.495: INFO: (12) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 6.994869ms) May 27 00:46:35.495: INFO: (12) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 7.034369ms) May 27 00:46:35.495: INFO: (12) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 7.041784ms) May 27 00:46:35.495: INFO: (12) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 7.141577ms) May 27 00:46:35.495: INFO: (12) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 7.13001ms) May 27 00:46:35.495: INFO: (12) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 7.142695ms) May 27 00:46:35.495: INFO: (12) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 7.306034ms) May 27 00:46:35.504: INFO: (13) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... 
(200; 8.495401ms) May 27 00:46:35.504: INFO: (13) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 8.508589ms) May 27 00:46:35.504: INFO: (13) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 8.766809ms) May 27 00:46:35.504: INFO: (13) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 8.789064ms) May 27 00:46:35.504: INFO: (13) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 8.824787ms) May 27 00:46:35.504: INFO: (13) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 8.780947ms) May 27 00:46:35.504: INFO: (13) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 8.870191ms) May 27 00:46:35.505: INFO: (13) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 9.053446ms) May 27 00:46:35.505: INFO: (13) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: ... (200; 9.95868ms) May 27 00:46:35.506: INFO: (13) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 10.798207ms) May 27 00:46:35.506: INFO: (13) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 10.876625ms) May 27 00:46:35.506: INFO: (13) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 10.864089ms) May 27 00:46:35.506: INFO: (13) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 10.911833ms) May 27 00:46:35.506: INFO: (13) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 10.870714ms) May 27 00:46:35.507: INFO: (13) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 11.018835ms) May 27 00:46:35.511: INFO: (14) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: ... (200; 4.481056ms) May 27 00:46:35.511: INFO: (14) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 4.459677ms) May 27 00:46:35.511: INFO: (14) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 4.483272ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 4.879954ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 4.978459ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 5.125201ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 5.226442ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... 
(200; 5.262496ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 5.197601ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 5.521804ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 5.598106ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 5.599032ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 5.624213ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 5.605502ms) May 27 00:46:35.512: INFO: (14) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 5.689038ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 3.239795ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 3.568309ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 3.558101ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 3.699808ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 3.870213ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 3.867739ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test<... (200; 3.998737ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 3.927623ms) May 27 00:46:35.516: INFO: (15) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 3.973575ms) May 27 00:46:35.517: INFO: (15) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 4.344295ms) May 27 00:46:35.517: INFO: (15) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 4.582471ms) May 27 00:46:35.517: INFO: (15) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 4.645614ms) May 27 00:46:35.517: INFO: (15) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 4.742174ms) May 27 00:46:35.517: INFO: (15) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 4.593135ms) May 27 00:46:35.517: INFO: (15) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 4.828784ms) May 27 00:46:35.520: INFO: (16) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test<... 
(200; 3.935844ms) May 27 00:46:35.521: INFO: (16) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 4.045785ms) May 27 00:46:35.521: INFO: (16) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.005401ms) May 27 00:46:35.522: INFO: (16) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:460/proxy/: tls baz (200; 4.111495ms) May 27 00:46:35.522: INFO: (16) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.498109ms) May 27 00:46:35.522: INFO: (16) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 4.455752ms) May 27 00:46:35.522: INFO: (16) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 4.482629ms) May 27 00:46:35.522: INFO: (16) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 4.557038ms) May 27 00:46:35.522: INFO: (16) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 4.640396ms) May 27 00:46:35.522: INFO: (16) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 4.632751ms) May 27 00:46:35.522: INFO: (16) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 4.778054ms) May 27 00:46:35.524: INFO: (16) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 6.368222ms) May 27 00:46:35.524: INFO: (16) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 6.36356ms) May 27 00:46:35.524: INFO: (16) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 6.378152ms) May 27 00:46:35.524: INFO: (16) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 6.445803ms) May 27 00:46:35.530: INFO: (17) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 5.561449ms) May 27 00:46:35.530: INFO: (17) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 6.017828ms) May 27 00:46:35.530: INFO: (17) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test<... 
(200; 6.085771ms) May 27 00:46:35.530: INFO: (17) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm/proxy/: test (200; 6.115891ms) May 27 00:46:35.531: INFO: (17) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 6.727467ms) May 27 00:46:35.531: INFO: (17) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 6.75649ms) May 27 00:46:35.531: INFO: (17) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 6.80959ms) May 27 00:46:35.531: INFO: (17) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 6.885022ms) May 27 00:46:35.531: INFO: (17) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 6.896285ms) May 27 00:46:35.531: INFO: (17) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 6.876458ms) May 27 00:46:35.535: INFO: (18) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 3.551326ms) May 27 00:46:35.535: INFO: (18) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.320898ms) May 27 00:46:35.535: INFO: (18) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test (200; 4.464548ms) May 27 00:46:35.535: INFO: (18) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 4.411532ms) May 27 00:46:35.535: INFO: (18) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... (200; 4.446302ms) May 27 00:46:35.535: INFO: (18) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.465151ms) May 27 00:46:35.535: INFO: (18) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname1/proxy/: foo (200; 4.42021ms) May 27 00:46:35.535: INFO: (18) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 4.435045ms) May 27 00:46:35.536: INFO: (18) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname1/proxy/: foo (200; 4.576323ms) May 27 00:46:35.536: INFO: (18) /api/v1/namespaces/proxy-3299/services/http:proxy-service-q8m7n:portname2/proxy/: bar (200; 5.166829ms) May 27 00:46:35.537: INFO: (18) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname2/proxy/: tls qux (200; 5.606576ms) May 27 00:46:35.537: INFO: (18) /api/v1/namespaces/proxy-3299/services/https:proxy-service-q8m7n:tlsportname1/proxy/: tls baz (200; 6.015085ms) May 27 00:46:35.542: INFO: (19) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:1080/proxy/: test<... (200; 4.487597ms) May 27 00:46:35.542: INFO: (19) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:462/proxy/: tls qux (200; 4.555647ms) May 27 00:46:35.542: INFO: (19) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 4.951869ms) May 27 00:46:35.542: INFO: (19) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 5.036882ms) May 27 00:46:35.542: INFO: (19) /api/v1/namespaces/proxy-3299/pods/https:proxy-service-q8m7n-8ltsm:443/proxy/: test (200; 5.475844ms) May 27 00:46:35.543: INFO: (19) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:1080/proxy/: ... 
(200; 5.561623ms) May 27 00:46:35.543: INFO: (19) /api/v1/namespaces/proxy-3299/pods/http:proxy-service-q8m7n-8ltsm:160/proxy/: foo (200; 5.601358ms) May 27 00:46:35.543: INFO: (19) /api/v1/namespaces/proxy-3299/pods/proxy-service-q8m7n-8ltsm:162/proxy/: bar (200; 5.500207ms) May 27 00:46:35.543: INFO: (19) /api/v1/namespaces/proxy-3299/services/proxy-service-q8m7n:portname2/proxy/: bar (200; 5.679046ms) STEP: deleting ReplicationController proxy-service-q8m7n in namespace proxy-3299, will wait for the garbage collector to delete the pods May 27 00:46:35.602: INFO: Deleting ReplicationController proxy-service-q8m7n took: 7.177907ms May 27 00:46:35.902: INFO: Terminating ReplicationController proxy-service-q8m7n pods took: 300.280251ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:46:45.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3299" for this suite. • [SLOW TEST:14.905 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":196,"skipped":3385,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:46:45.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 27 00:46:45.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 27 00:46:45.343: INFO: stderr: "" May 27 00:46:45.343: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:46:45.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7616" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":197,"skipped":3402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:46:45.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 27 00:46:45.422: INFO: Waiting up to 5m0s for pod "var-expansion-dad536f3-feae-4fa6-b768-c1b0d03d5e39" in namespace "var-expansion-7035" to be "Succeeded or Failed" May 27 00:46:45.448: INFO: Pod "var-expansion-dad536f3-feae-4fa6-b768-c1b0d03d5e39": Phase="Pending", Reason="", readiness=false. Elapsed: 25.887589ms May 27 00:46:47.455: INFO: Pod "var-expansion-dad536f3-feae-4fa6-b768-c1b0d03d5e39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033065077s May 27 00:46:49.467: INFO: Pod "var-expansion-dad536f3-feae-4fa6-b768-c1b0d03d5e39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045230267s STEP: Saw pod success May 27 00:46:49.467: INFO: Pod "var-expansion-dad536f3-feae-4fa6-b768-c1b0d03d5e39" satisfied condition "Succeeded or Failed" May 27 00:46:49.470: INFO: Trying to get logs from node latest-worker pod var-expansion-dad536f3-feae-4fa6-b768-c1b0d03d5e39 container dapi-container: STEP: delete the pod May 27 00:46:49.568: INFO: Waiting for pod var-expansion-dad536f3-feae-4fa6-b768-c1b0d03d5e39 to disappear May 27 00:46:49.581: INFO: Pod var-expansion-dad536f3-feae-4fa6-b768-c1b0d03d5e39 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:46:49.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7035" for this suite. 
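The var-expansion pod above succeeds because the kubelet rewrites $(VAR) references in a container's command from the container's environment before exec, with no shell involved. A minimal pod of the same shape; the names, image, and message are illustrative rather than the test's generated values:

```go
// Minimal pod showing kubelet-side $(VAR) substitution in a container command,
// the mechanism this test verifies.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox:1.29",
				// $(MESSAGE) is replaced by the kubelet before the binary runs;
				// no shell is needed for the substitution.
				Command: []string{"/bin/echo", "$(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from the environment"}},
			}},
		},
	}
}

func main() { _ = varExpansionPod() }
```

With RestartPolicy Never the pod runs to completion and lands in Succeeded, which is the "Succeeded or Failed" condition the framework polls for above.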
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":198,"skipped":3433,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:46:49.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 27 00:46:49.673: INFO: Waiting up to 5m0s for pod "pod-d4314417-4261-4ad2-b053-a9e6f2fa37b0" in namespace "emptydir-8188" to be "Succeeded or Failed" May 27 00:46:49.682: INFO: Pod "pod-d4314417-4261-4ad2-b053-a9e6f2fa37b0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.223116ms May 27 00:46:51.699: INFO: Pod "pod-d4314417-4261-4ad2-b053-a9e6f2fa37b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026449577s May 27 00:46:53.703: INFO: Pod "pod-d4314417-4261-4ad2-b053-a9e6f2fa37b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03037884s STEP: Saw pod success May 27 00:46:53.703: INFO: Pod "pod-d4314417-4261-4ad2-b053-a9e6f2fa37b0" satisfied condition "Succeeded or Failed" May 27 00:46:53.707: INFO: Trying to get logs from node latest-worker pod pod-d4314417-4261-4ad2-b053-a9e6f2fa37b0 container test-container: STEP: delete the pod May 27 00:46:53.727: INFO: Waiting for pod pod-d4314417-4261-4ad2-b053-a9e6f2fa37b0 to disappear May 27 00:46:53.777: INFO: Pod pod-d4314417-4261-4ad2-b053-a9e6f2fa37b0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:46:53.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8188" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":199,"skipped":3438,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:46:53.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4554 STEP: creating replication controller nodeport-test in namespace services-4554 I0527 00:46:53.911264 8 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4554, replica count: 2 I0527 00:46:56.961815 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:46:59.962097 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 00:46:59.962: INFO: Creating new exec pod May 27 00:47:05.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4554 execpodx999z -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 27 00:47:05.786: INFO: stderr: "I0527 00:47:05.679424 2327 log.go:172] (0xc0000e0bb0) (0xc00015c820) Create stream\nI0527 00:47:05.679509 2327 log.go:172] (0xc0000e0bb0) (0xc00015c820) Stream added, broadcasting: 1\nI0527 00:47:05.685935 2327 log.go:172] (0xc0000e0bb0) Reply frame received for 1\nI0527 00:47:05.686010 2327 log.go:172] (0xc0000e0bb0) (0xc00015d9a0) Create stream\nI0527 00:47:05.686033 2327 log.go:172] (0xc0000e0bb0) (0xc00015d9a0) Stream added, broadcasting: 3\nI0527 00:47:05.687572 2327 log.go:172] (0xc0000e0bb0) Reply frame received for 3\nI0527 00:47:05.687607 2327 log.go:172] (0xc0000e0bb0) (0xc000610b40) Create stream\nI0527 00:47:05.687624 2327 log.go:172] (0xc0000e0bb0) (0xc000610b40) Stream added, broadcasting: 5\nI0527 00:47:05.689427 2327 log.go:172] (0xc0000e0bb0) Reply frame received for 5\nI0527 00:47:05.758018 2327 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0527 00:47:05.758041 2327 log.go:172] (0xc000610b40) (5) Data frame handling\nI0527 00:47:05.758052 2327 log.go:172] (0xc000610b40) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0527 00:47:05.777070 2327 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0527 00:47:05.777472 2327 log.go:172] (0xc000610b40) (5) Data frame handling\nI0527 00:47:05.777548 2327 log.go:172] (0xc000610b40) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0527 00:47:05.777752 2327 log.go:172] (0xc0000e0bb0) Data frame received for 3\nI0527 00:47:05.777832 2327 log.go:172] (0xc00015d9a0) (3) Data frame 
handling\nI0527 00:47:05.778210 2327 log.go:172] (0xc0000e0bb0) Data frame received for 5\nI0527 00:47:05.778231 2327 log.go:172] (0xc000610b40) (5) Data frame handling\nI0527 00:47:05.779738 2327 log.go:172] (0xc0000e0bb0) Data frame received for 1\nI0527 00:47:05.779752 2327 log.go:172] (0xc00015c820) (1) Data frame handling\nI0527 00:47:05.779765 2327 log.go:172] (0xc00015c820) (1) Data frame sent\nI0527 00:47:05.779862 2327 log.go:172] (0xc0000e0bb0) (0xc00015c820) Stream removed, broadcasting: 1\nI0527 00:47:05.779931 2327 log.go:172] (0xc0000e0bb0) Go away received\nI0527 00:47:05.780253 2327 log.go:172] (0xc0000e0bb0) (0xc00015c820) Stream removed, broadcasting: 1\nI0527 00:47:05.780281 2327 log.go:172] (0xc0000e0bb0) (0xc00015d9a0) Stream removed, broadcasting: 3\nI0527 00:47:05.780300 2327 log.go:172] (0xc0000e0bb0) (0xc000610b40) Stream removed, broadcasting: 5\n" May 27 00:47:05.786: INFO: stdout: "" May 27 00:47:05.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4554 execpodx999z -- /bin/sh -x -c nc -zv -t -w 2 10.102.200.169 80' May 27 00:47:05.989: INFO: stderr: "I0527 00:47:05.901466 2349 log.go:172] (0xc0000e7a20) (0xc000c30140) Create stream\nI0527 00:47:05.901516 2349 log.go:172] (0xc0000e7a20) (0xc000c30140) Stream added, broadcasting: 1\nI0527 00:47:05.906281 2349 log.go:172] (0xc0000e7a20) Reply frame received for 1\nI0527 00:47:05.906333 2349 log.go:172] (0xc0000e7a20) (0xc000758fa0) Create stream\nI0527 00:47:05.906351 2349 log.go:172] (0xc0000e7a20) (0xc000758fa0) Stream added, broadcasting: 3\nI0527 00:47:05.907317 2349 log.go:172] (0xc0000e7a20) Reply frame received for 3\nI0527 00:47:05.907372 2349 log.go:172] (0xc0000e7a20) (0xc00070ab40) Create stream\nI0527 00:47:05.907395 2349 log.go:172] (0xc0000e7a20) (0xc00070ab40) Stream added, broadcasting: 5\nI0527 00:47:05.908393 2349 log.go:172] (0xc0000e7a20) Reply frame received for 5\nI0527 00:47:05.982179 2349 log.go:172] (0xc0000e7a20) Data frame received for 5\nI0527 00:47:05.982217 2349 log.go:172] (0xc0000e7a20) Data frame received for 3\nI0527 00:47:05.982252 2349 log.go:172] (0xc000758fa0) (3) Data frame handling\nI0527 00:47:05.982281 2349 log.go:172] (0xc00070ab40) (5) Data frame handling\nI0527 00:47:05.982310 2349 log.go:172] (0xc00070ab40) (5) Data frame sent\nI0527 00:47:05.982323 2349 log.go:172] (0xc0000e7a20) Data frame received for 5\nI0527 00:47:05.982331 2349 log.go:172] (0xc00070ab40) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.200.169 80\nConnection to 10.102.200.169 80 port [tcp/http] succeeded!\nI0527 00:47:05.983837 2349 log.go:172] (0xc0000e7a20) Data frame received for 1\nI0527 00:47:05.983851 2349 log.go:172] (0xc000c30140) (1) Data frame handling\nI0527 00:47:05.983859 2349 log.go:172] (0xc000c30140) (1) Data frame sent\nI0527 00:47:05.983869 2349 log.go:172] (0xc0000e7a20) (0xc000c30140) Stream removed, broadcasting: 1\nI0527 00:47:05.983878 2349 log.go:172] (0xc0000e7a20) Go away received\nI0527 00:47:05.984319 2349 log.go:172] (0xc0000e7a20) (0xc000c30140) Stream removed, broadcasting: 1\nI0527 00:47:05.984367 2349 log.go:172] (0xc0000e7a20) (0xc000758fa0) Stream removed, broadcasting: 3\nI0527 00:47:05.984393 2349 log.go:172] (0xc0000e7a20) (0xc00070ab40) Stream removed, broadcasting: 5\n" May 27 00:47:05.989: INFO: stdout: "" May 27 00:47:05.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4554 
execpodx999z -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31056' May 27 00:47:06.204: INFO: stderr: "I0527 00:47:06.118108 2371 log.go:172] (0xc000c12fd0) (0xc000bd81e0) Create stream\nI0527 00:47:06.118163 2371 log.go:172] (0xc000c12fd0) (0xc000bd81e0) Stream added, broadcasting: 1\nI0527 00:47:06.124368 2371 log.go:172] (0xc000c12fd0) Reply frame received for 1\nI0527 00:47:06.124427 2371 log.go:172] (0xc000c12fd0) (0xc0006b4aa0) Create stream\nI0527 00:47:06.124443 2371 log.go:172] (0xc000c12fd0) (0xc0006b4aa0) Stream added, broadcasting: 3\nI0527 00:47:06.125763 2371 log.go:172] (0xc000c12fd0) Reply frame received for 3\nI0527 00:47:06.125805 2371 log.go:172] (0xc000c12fd0) (0xc0006b5540) Create stream\nI0527 00:47:06.125818 2371 log.go:172] (0xc000c12fd0) (0xc0006b5540) Stream added, broadcasting: 5\nI0527 00:47:06.126873 2371 log.go:172] (0xc000c12fd0) Reply frame received for 5\nI0527 00:47:06.196608 2371 log.go:172] (0xc000c12fd0) Data frame received for 5\nI0527 00:47:06.196638 2371 log.go:172] (0xc0006b5540) (5) Data frame handling\nI0527 00:47:06.196679 2371 log.go:172] (0xc0006b5540) (5) Data frame sent\nI0527 00:47:06.196714 2371 log.go:172] (0xc000c12fd0) Data frame received for 5\nI0527 00:47:06.196736 2371 log.go:172] (0xc0006b5540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31056\nConnection to 172.17.0.13 31056 port [tcp/31056] succeeded!\nI0527 00:47:06.196776 2371 log.go:172] (0xc0006b5540) (5) Data frame sent\nI0527 00:47:06.197532 2371 log.go:172] (0xc000c12fd0) Data frame received for 3\nI0527 00:47:06.197569 2371 log.go:172] (0xc0006b4aa0) (3) Data frame handling\nI0527 00:47:06.197595 2371 log.go:172] (0xc000c12fd0) Data frame received for 5\nI0527 00:47:06.197614 2371 log.go:172] (0xc0006b5540) (5) Data frame handling\nI0527 00:47:06.198914 2371 log.go:172] (0xc000c12fd0) Data frame received for 1\nI0527 00:47:06.198928 2371 log.go:172] (0xc000bd81e0) (1) Data frame handling\nI0527 00:47:06.198951 2371 log.go:172] (0xc000bd81e0) (1) Data frame sent\nI0527 00:47:06.198963 2371 log.go:172] (0xc000c12fd0) (0xc000bd81e0) Stream removed, broadcasting: 1\nI0527 00:47:06.199148 2371 log.go:172] (0xc000c12fd0) Go away received\nI0527 00:47:06.199281 2371 log.go:172] (0xc000c12fd0) (0xc000bd81e0) Stream removed, broadcasting: 1\nI0527 00:47:06.199301 2371 log.go:172] (0xc000c12fd0) (0xc0006b4aa0) Stream removed, broadcasting: 3\nI0527 00:47:06.199309 2371 log.go:172] (0xc000c12fd0) (0xc0006b5540) Stream removed, broadcasting: 5\n" May 27 00:47:06.204: INFO: stdout: "" May 27 00:47:06.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4554 execpodx999z -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31056' May 27 00:47:06.425: INFO: stderr: "I0527 00:47:06.351226 2391 log.go:172] (0xc000b2bd90) (0xc000b26c80) Create stream\nI0527 00:47:06.351280 2391 log.go:172] (0xc000b2bd90) (0xc000b26c80) Stream added, broadcasting: 1\nI0527 00:47:06.354563 2391 log.go:172] (0xc000b2bd90) Reply frame received for 1\nI0527 00:47:06.354605 2391 log.go:172] (0xc000b2bd90) (0xc000a30460) Create stream\nI0527 00:47:06.354627 2391 log.go:172] (0xc000b2bd90) (0xc000a30460) Stream added, broadcasting: 3\nI0527 00:47:06.355650 2391 log.go:172] (0xc000b2bd90) Reply frame received for 3\nI0527 00:47:06.355693 2391 log.go:172] (0xc000b2bd90) (0xc000a325a0) Create stream\nI0527 00:47:06.355707 2391 log.go:172] (0xc000b2bd90) (0xc000a325a0) Stream added, broadcasting: 5\nI0527 00:47:06.356491 2391 log.go:172] 
(0xc000b2bd90) Reply frame received for 5\nI0527 00:47:06.418221 2391 log.go:172] (0xc000b2bd90) Data frame received for 3\nI0527 00:47:06.418261 2391 log.go:172] (0xc000a30460) (3) Data frame handling\nI0527 00:47:06.418355 2391 log.go:172] (0xc000b2bd90) Data frame received for 5\nI0527 00:47:06.418531 2391 log.go:172] (0xc000a325a0) (5) Data frame handling\nI0527 00:47:06.418584 2391 log.go:172] (0xc000a325a0) (5) Data frame sent\nI0527 00:47:06.418606 2391 log.go:172] (0xc000b2bd90) Data frame received for 5\nI0527 00:47:06.418623 2391 log.go:172] (0xc000a325a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31056\nConnection to 172.17.0.12 31056 port [tcp/31056] succeeded!\nI0527 00:47:06.419706 2391 log.go:172] (0xc000b2bd90) Data frame received for 1\nI0527 00:47:06.419728 2391 log.go:172] (0xc000b26c80) (1) Data frame handling\nI0527 00:47:06.419760 2391 log.go:172] (0xc000b26c80) (1) Data frame sent\nI0527 00:47:06.419805 2391 log.go:172] (0xc000b2bd90) (0xc000b26c80) Stream removed, broadcasting: 1\nI0527 00:47:06.419863 2391 log.go:172] (0xc000b2bd90) Go away received\nI0527 00:47:06.420334 2391 log.go:172] (0xc000b2bd90) (0xc000b26c80) Stream removed, broadcasting: 1\nI0527 00:47:06.420364 2391 log.go:172] (0xc000b2bd90) (0xc000a30460) Stream removed, broadcasting: 3\nI0527 00:47:06.420375 2391 log.go:172] (0xc000b2bd90) (0xc000a325a0) Stream removed, broadcasting: 5\n" May 27 00:47:06.425: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:47:06.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4554" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.644 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":200,"skipped":3450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:47:06.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 27 00:47:11.036: INFO: Successfully updated pod "pod-update-06c01bf4-257a-4624-ad9d-eb916b78292c" STEP: verifying the updated pod is in kubernetes May 27 00:47:11.061: INFO: Pod update OK [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:47:11.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8464" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":201,"skipped":3474,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:47:11.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7b296d11-6b4d-44bc-bda6-a641c9558cc4 STEP: Creating a pod to test consume secrets May 27 00:47:11.170: INFO: Waiting up to 5m0s for pod "pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e" in namespace "secrets-1682" to be "Succeeded or Failed" May 27 00:47:11.174: INFO: Pod "pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007268ms May 27 00:47:13.208: INFO: Pod "pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038132971s May 27 00:47:16.006: INFO: Pod "pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e": Phase="Running", Reason="", readiness=true. Elapsed: 4.835610928s May 27 00:47:18.010: INFO: Pod "pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.840144554s STEP: Saw pod success May 27 00:47:18.010: INFO: Pod "pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e" satisfied condition "Succeeded or Failed" May 27 00:47:18.014: INFO: Trying to get logs from node latest-worker pod pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e container secret-env-test: STEP: delete the pod May 27 00:47:18.055: INFO: Waiting for pod pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e to disappear May 27 00:47:18.068: INFO: Pod pod-secrets-4b2108f0-199b-42d0-8cb2-10f0f302c08e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:47:18.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1682" for this suite. 
• [SLOW TEST:7.003 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3483,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:47:18.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:47:18.840: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:47:20.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137238, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137238, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137238, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137238, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:47:23.977: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:47:24.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9274" for this suite. STEP: Destroying namespace "webhook-9274-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.310 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":203,"skipped":3497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:47:24.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1237 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 27 00:47:24.541: INFO: Found 0 stateful pods, waiting for 3 May 27 00:47:34.657: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 27 00:47:34.657: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 27 00:47:34.657: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 27 00:47:44.563: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 27 00:47:44.563: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 27 00:47:44.563: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 27 00:47:44.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1237 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 00:47:44.795: INFO: stderr: "I0527 00:47:44.686046 2411 log.go:172] (0xc000ac0000) (0xc00070ea00) Create stream\nI0527 00:47:44.686086 2411 log.go:172] (0xc000ac0000) (0xc00070ea00) Stream added, broadcasting: 1\nI0527 00:47:44.687255 2411 log.go:172] (0xc000ac0000) Reply frame received for 1\nI0527 00:47:44.687281 2411 log.go:172] (0xc000ac0000) (0xc00070ef00) Create stream\nI0527 00:47:44.687289 2411 log.go:172] (0xc000ac0000) (0xc00070ef00) Stream added, broadcasting: 3\nI0527 00:47:44.688058 2411 log.go:172] (0xc000ac0000) 
Reply frame received for 3\nI0527 00:47:44.688080 2411 log.go:172] (0xc000ac0000) (0xc00070f4a0) Create stream\nI0527 00:47:44.688087 2411 log.go:172] (0xc000ac0000) (0xc00070f4a0) Stream added, broadcasting: 5\nI0527 00:47:44.688693 2411 log.go:172] (0xc000ac0000) Reply frame received for 5\nI0527 00:47:44.755305 2411 log.go:172] (0xc000ac0000) Data frame received for 5\nI0527 00:47:44.755333 2411 log.go:172] (0xc00070f4a0) (5) Data frame handling\nI0527 00:47:44.755355 2411 log.go:172] (0xc00070f4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 00:47:44.788699 2411 log.go:172] (0xc000ac0000) Data frame received for 5\nI0527 00:47:44.788724 2411 log.go:172] (0xc000ac0000) Data frame received for 3\nI0527 00:47:44.788739 2411 log.go:172] (0xc00070ef00) (3) Data frame handling\nI0527 00:47:44.788749 2411 log.go:172] (0xc00070ef00) (3) Data frame sent\nI0527 00:47:44.788756 2411 log.go:172] (0xc000ac0000) Data frame received for 3\nI0527 00:47:44.788762 2411 log.go:172] (0xc00070ef00) (3) Data frame handling\nI0527 00:47:44.788793 2411 log.go:172] (0xc00070f4a0) (5) Data frame handling\nI0527 00:47:44.790884 2411 log.go:172] (0xc000ac0000) Data frame received for 1\nI0527 00:47:44.790896 2411 log.go:172] (0xc00070ea00) (1) Data frame handling\nI0527 00:47:44.790902 2411 log.go:172] (0xc00070ea00) (1) Data frame sent\nI0527 00:47:44.790910 2411 log.go:172] (0xc000ac0000) (0xc00070ea00) Stream removed, broadcasting: 1\nI0527 00:47:44.790972 2411 log.go:172] (0xc000ac0000) Go away received\nI0527 00:47:44.791130 2411 log.go:172] (0xc000ac0000) (0xc00070ea00) Stream removed, broadcasting: 1\nI0527 00:47:44.791138 2411 log.go:172] (0xc000ac0000) (0xc00070ef00) Stream removed, broadcasting: 3\nI0527 00:47:44.791143 2411 log.go:172] (0xc000ac0000) (0xc00070f4a0) Stream removed, broadcasting: 5\n" May 27 00:47:44.795: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 00:47:44.795: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 27 00:47:54.826: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 27 00:48:04.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1237 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 00:48:05.085: INFO: stderr: "I0527 00:48:04.996354 2431 log.go:172] (0xc0009353f0) (0xc000b125a0) Create stream\nI0527 00:48:04.996406 2431 log.go:172] (0xc0009353f0) (0xc000b125a0) Stream added, broadcasting: 1\nI0527 00:48:05.001793 2431 log.go:172] (0xc0009353f0) Reply frame received for 1\nI0527 00:48:05.001836 2431 log.go:172] (0xc0009353f0) (0xc0006ecd20) Create stream\nI0527 00:48:05.001848 2431 log.go:172] (0xc0009353f0) (0xc0006ecd20) Stream added, broadcasting: 3\nI0527 00:48:05.002790 2431 log.go:172] (0xc0009353f0) Reply frame received for 3\nI0527 00:48:05.002844 2431 log.go:172] (0xc0009353f0) (0xc00051cdc0) Create stream\nI0527 00:48:05.002860 2431 log.go:172] (0xc0009353f0) (0xc00051cdc0) Stream added, broadcasting: 5\nI0527 00:48:05.003621 2431 log.go:172] (0xc0009353f0) Reply frame received for 5\nI0527 00:48:05.077900 2431 log.go:172] (0xc0009353f0) Data frame received for 5\nI0527 
00:48:05.077927 2431 log.go:172] (0xc00051cdc0) (5) Data frame handling\nI0527 00:48:05.077936 2431 log.go:172] (0xc00051cdc0) (5) Data frame sent\nI0527 00:48:05.077943 2431 log.go:172] (0xc0009353f0) Data frame received for 5\nI0527 00:48:05.077948 2431 log.go:172] (0xc00051cdc0) (5) Data frame handling\nI0527 00:48:05.077961 2431 log.go:172] (0xc0009353f0) Data frame received for 3\nI0527 00:48:05.077968 2431 log.go:172] (0xc0006ecd20) (3) Data frame handling\nI0527 00:48:05.077974 2431 log.go:172] (0xc0006ecd20) (3) Data frame sent\nI0527 00:48:05.077981 2431 log.go:172] (0xc0009353f0) Data frame received for 3\nI0527 00:48:05.077990 2431 log.go:172] (0xc0006ecd20) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0527 00:48:05.079561 2431 log.go:172] (0xc0009353f0) Data frame received for 1\nI0527 00:48:05.079607 2431 log.go:172] (0xc000b125a0) (1) Data frame handling\nI0527 00:48:05.079656 2431 log.go:172] (0xc000b125a0) (1) Data frame sent\nI0527 00:48:05.079723 2431 log.go:172] (0xc0009353f0) (0xc000b125a0) Stream removed, broadcasting: 1\nI0527 00:48:05.080004 2431 log.go:172] (0xc0009353f0) Go away received\nI0527 00:48:05.080050 2431 log.go:172] (0xc0009353f0) (0xc000b125a0) Stream removed, broadcasting: 1\nI0527 00:48:05.080099 2431 log.go:172] (0xc0009353f0) (0xc0006ecd20) Stream removed, broadcasting: 3\nI0527 00:48:05.080131 2431 log.go:172] (0xc0009353f0) (0xc00051cdc0) Stream removed, broadcasting: 5\n" May 27 00:48:05.085: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 00:48:05.085: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision May 27 00:48:25.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1237 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 00:48:25.358: INFO: stderr: "I0527 00:48:25.251643 2452 log.go:172] (0xc0006ec370) (0xc00013bf40) Create stream\nI0527 00:48:25.251699 2452 log.go:172] (0xc0006ec370) (0xc00013bf40) Stream added, broadcasting: 1\nI0527 00:48:25.254170 2452 log.go:172] (0xc0006ec370) Reply frame received for 1\nI0527 00:48:25.254209 2452 log.go:172] (0xc0006ec370) (0xc000324500) Create stream\nI0527 00:48:25.254221 2452 log.go:172] (0xc0006ec370) (0xc000324500) Stream added, broadcasting: 3\nI0527 00:48:25.255283 2452 log.go:172] (0xc0006ec370) Reply frame received for 3\nI0527 00:48:25.255352 2452 log.go:172] (0xc0006ec370) (0xc0006badc0) Create stream\nI0527 00:48:25.255379 2452 log.go:172] (0xc0006ec370) (0xc0006badc0) Stream added, broadcasting: 5\nI0527 00:48:25.256161 2452 log.go:172] (0xc0006ec370) Reply frame received for 5\nI0527 00:48:25.313658 2452 log.go:172] (0xc0006ec370) Data frame received for 5\nI0527 00:48:25.313685 2452 log.go:172] (0xc0006badc0) (5) Data frame handling\nI0527 00:48:25.313704 2452 log.go:172] (0xc0006badc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 00:48:25.351864 2452 log.go:172] (0xc0006ec370) Data frame received for 5\nI0527 00:48:25.351966 2452 log.go:172] (0xc0006badc0) (5) Data frame handling\nI0527 00:48:25.351995 2452 log.go:172] (0xc0006ec370) Data frame received for 3\nI0527 00:48:25.352009 2452 log.go:172] (0xc000324500) (3) Data frame handling\nI0527 00:48:25.352022 2452 log.go:172] (0xc000324500) (3) Data frame sent\nI0527 
00:48:25.352034 2452 log.go:172] (0xc0006ec370) Data frame received for 3\nI0527 00:48:25.352043 2452 log.go:172] (0xc000324500) (3) Data frame handling\nI0527 00:48:25.353823 2452 log.go:172] (0xc0006ec370) Data frame received for 1\nI0527 00:48:25.353845 2452 log.go:172] (0xc00013bf40) (1) Data frame handling\nI0527 00:48:25.353851 2452 log.go:172] (0xc00013bf40) (1) Data frame sent\nI0527 00:48:25.353861 2452 log.go:172] (0xc0006ec370) (0xc00013bf40) Stream removed, broadcasting: 1\nI0527 00:48:25.353871 2452 log.go:172] (0xc0006ec370) Go away received\nI0527 00:48:25.354185 2452 log.go:172] (0xc0006ec370) (0xc00013bf40) Stream removed, broadcasting: 1\nI0527 00:48:25.354202 2452 log.go:172] (0xc0006ec370) (0xc000324500) Stream removed, broadcasting: 3\nI0527 00:48:25.354210 2452 log.go:172] (0xc0006ec370) (0xc0006badc0) Stream removed, broadcasting: 5\n" May 27 00:48:25.358: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 00:48:25.358: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 00:48:35.394: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 27 00:48:45.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1237 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 00:48:45.799: INFO: stderr: "I0527 00:48:45.597677 2474 log.go:172] (0xc00003a0b0) (0xc00053c1e0) Create stream\nI0527 00:48:45.597745 2474 log.go:172] (0xc00003a0b0) (0xc00053c1e0) Stream added, broadcasting: 1\nI0527 00:48:45.600382 2474 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0527 00:48:45.600425 2474 log.go:172] (0xc00003a0b0) (0xc000508d20) Create stream\nI0527 00:48:45.600437 2474 log.go:172] (0xc00003a0b0) (0xc000508d20) Stream added, broadcasting: 3\nI0527 00:48:45.601866 2474 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0527 00:48:45.602030 2474 log.go:172] (0xc00003a0b0) (0xc00053d180) Create stream\nI0527 00:48:45.602067 2474 log.go:172] (0xc00003a0b0) (0xc00053d180) Stream added, broadcasting: 5\nI0527 00:48:45.603642 2474 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0527 00:48:45.792643 2474 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0527 00:48:45.792700 2474 log.go:172] (0xc000508d20) (3) Data frame handling\nI0527 00:48:45.792721 2474 log.go:172] (0xc000508d20) (3) Data frame sent\nI0527 00:48:45.792736 2474 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0527 00:48:45.792748 2474 log.go:172] (0xc000508d20) (3) Data frame handling\nI0527 00:48:45.792802 2474 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0527 00:48:45.792828 2474 log.go:172] (0xc00053d180) (5) Data frame handling\nI0527 00:48:45.792975 2474 log.go:172] (0xc00053d180) (5) Data frame sent\nI0527 00:48:45.792988 2474 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0527 00:48:45.792995 2474 log.go:172] (0xc00053d180) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0527 00:48:45.794171 2474 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0527 00:48:45.794186 2474 log.go:172] (0xc00053c1e0) (1) Data frame handling\nI0527 00:48:45.794195 2474 log.go:172] (0xc00053c1e0) (1) Data frame sent\nI0527 00:48:45.794204 2474 log.go:172] (0xc00003a0b0) (0xc00053c1e0) Stream removed, broadcasting: 1\nI0527 00:48:45.794214 2474 log.go:172] (0xc00003a0b0) Go away 
received\nI0527 00:48:45.794608 2474 log.go:172] (0xc00003a0b0) (0xc00053c1e0) Stream removed, broadcasting: 1\nI0527 00:48:45.794626 2474 log.go:172] (0xc00003a0b0) (0xc000508d20) Stream removed, broadcasting: 3\nI0527 00:48:45.794636 2474 log.go:172] (0xc00003a0b0) (0xc00053d180) Stream removed, broadcasting: 5\n" May 27 00:48:45.799: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 00:48:45.799: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 27 00:49:15.818: INFO: Waiting for StatefulSet statefulset-1237/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 27 00:49:25.826: INFO: Deleting all statefulset in ns statefulset-1237 May 27 00:49:25.829: INFO: Scaling statefulset ss2 to 0 May 27 00:49:45.863: INFO: Waiting for statefulset status.replicas updated to 0 May 27 00:49:45.866: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:49:45.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1237" for this suite. • [SLOW TEST:141.522 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":204,"skipped":3533,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:49:45.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:49:45.971: INFO: Waiting up to 5m0s for pod "downwardapi-volume-279568a2-ffed-48d5-8e69-26bf2fbc621f" in namespace "projected-8171" to be "Succeeded or Failed" May 27 00:49:45.985: INFO: Pod "downwardapi-volume-279568a2-ffed-48d5-8e69-26bf2fbc621f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.26147ms May 27 00:49:47.990: INFO: Pod "downwardapi-volume-279568a2-ffed-48d5-8e69-26bf2fbc621f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018806612s May 27 00:49:49.995: INFO: Pod "downwardapi-volume-279568a2-ffed-48d5-8e69-26bf2fbc621f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023790023s STEP: Saw pod success May 27 00:49:49.995: INFO: Pod "downwardapi-volume-279568a2-ffed-48d5-8e69-26bf2fbc621f" satisfied condition "Succeeded or Failed" May 27 00:49:49.998: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-279568a2-ffed-48d5-8e69-26bf2fbc621f container client-container: STEP: delete the pod May 27 00:49:50.078: INFO: Waiting for pod downwardapi-volume-279568a2-ffed-48d5-8e69-26bf2fbc621f to disappear May 27 00:49:50.081: INFO: Pod downwardapi-volume-279568a2-ffed-48d5-8e69-26bf2fbc621f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:49:50.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8171" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3541,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:49:50.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 27 00:49:50.162: INFO: Created pod &Pod{ObjectMeta:{dns-7087 dns-7087 /api/v1/namespaces/dns-7087/pods/dns-7087 c116c1e4-9b1c-4718-9c14-bf64028ac037 7956687 0 2020-05-27 00:49:50 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-27 00:49:50 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gthj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gthj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gthj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]C
ontainerStatus{},},} May 27 00:49:50.171: INFO: The status of Pod dns-7087 is Pending, waiting for it to be Running (with Ready = true) May 27 00:49:52.278: INFO: The status of Pod dns-7087 is Pending, waiting for it to be Running (with Ready = true) May 27 00:49:54.175: INFO: The status of Pod dns-7087 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 27 00:49:54.176: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7087 PodName:dns-7087 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:49:54.176: INFO: >>> kubeConfig: /root/.kube/config I0527 00:49:54.215251 8 log.go:172] (0xc0023be630) (0xc00200e960) Create stream I0527 00:49:54.215288 8 log.go:172] (0xc0023be630) (0xc00200e960) Stream added, broadcasting: 1 I0527 00:49:54.217842 8 log.go:172] (0xc0023be630) Reply frame received for 1 I0527 00:49:54.217902 8 log.go:172] (0xc0023be630) (0xc002a76000) Create stream I0527 00:49:54.217920 8 log.go:172] (0xc0023be630) (0xc002a76000) Stream added, broadcasting: 3 I0527 00:49:54.219120 8 log.go:172] (0xc0023be630) Reply frame received for 3 I0527 00:49:54.219160 8 log.go:172] (0xc0023be630) (0xc00200eaa0) Create stream I0527 00:49:54.219181 8 log.go:172] (0xc0023be630) (0xc00200eaa0) Stream added, broadcasting: 5 I0527 00:49:54.220009 8 log.go:172] (0xc0023be630) Reply frame received for 5 I0527 00:49:54.309784 8 log.go:172] (0xc0023be630) Data frame received for 3 I0527 00:49:54.309813 8 log.go:172] (0xc002a76000) (3) Data frame handling I0527 00:49:54.309831 8 log.go:172] (0xc002a76000) (3) Data frame sent I0527 00:49:54.311682 8 log.go:172] (0xc0023be630) Data frame received for 3 I0527 00:49:54.311721 8 log.go:172] (0xc002a76000) (3) Data frame handling I0527 00:49:54.312102 8 log.go:172] (0xc0023be630) Data frame received for 5 I0527 00:49:54.312122 8 log.go:172] (0xc00200eaa0) (5) Data frame handling I0527 00:49:54.313439 8 log.go:172] (0xc0023be630) Data frame received for 1 I0527 00:49:54.313458 8 log.go:172] (0xc00200e960) (1) Data frame handling I0527 00:49:54.313496 8 log.go:172] (0xc00200e960) (1) Data frame sent I0527 00:49:54.313528 8 log.go:172] (0xc0023be630) (0xc00200e960) Stream removed, broadcasting: 1 I0527 00:49:54.313636 8 log.go:172] (0xc0023be630) (0xc00200e960) Stream removed, broadcasting: 1 I0527 00:49:54.313653 8 log.go:172] (0xc0023be630) (0xc002a76000) Stream removed, broadcasting: 3 I0527 00:49:54.313730 8 log.go:172] (0xc0023be630) Go away received I0527 00:49:54.313789 8 log.go:172] (0xc0023be630) (0xc00200eaa0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 27 00:49:54.313: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7087 PodName:dns-7087 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:49:54.313: INFO: >>> kubeConfig: /root/.kube/config I0527 00:49:54.340653 8 log.go:172] (0xc002aaa420) (0xc002a76320) Create stream I0527 00:49:54.340682 8 log.go:172] (0xc002aaa420) (0xc002a76320) Stream added, broadcasting: 1 I0527 00:49:54.342501 8 log.go:172] (0xc002aaa420) Reply frame received for 1 I0527 00:49:54.342538 8 log.go:172] (0xc002aaa420) (0xc00200ebe0) Create stream I0527 00:49:54.342549 8 log.go:172] (0xc002aaa420) (0xc00200ebe0) Stream added, broadcasting: 3 I0527 00:49:54.343319 8 log.go:172] (0xc002aaa420) Reply frame received for 3 I0527 00:49:54.343348 8 log.go:172] (0xc002aaa420) (0xc00200ec80) Create stream I0527 00:49:54.343370 8 log.go:172] (0xc002aaa420) (0xc00200ec80) Stream added, broadcasting: 5 I0527 00:49:54.344226 8 log.go:172] (0xc002aaa420) Reply frame received for 5 I0527 00:49:54.415426 8 log.go:172] (0xc002aaa420) Data frame received for 3 I0527 00:49:54.415457 8 log.go:172] (0xc00200ebe0) (3) Data frame handling I0527 00:49:54.415468 8 log.go:172] (0xc00200ebe0) (3) Data frame sent I0527 00:49:54.416974 8 log.go:172] (0xc002aaa420) Data frame received for 3 I0527 00:49:54.417019 8 log.go:172] (0xc00200ebe0) (3) Data frame handling I0527 00:49:54.417282 8 log.go:172] (0xc002aaa420) Data frame received for 5 I0527 00:49:54.417300 8 log.go:172] (0xc00200ec80) (5) Data frame handling I0527 00:49:54.418516 8 log.go:172] (0xc002aaa420) Data frame received for 1 I0527 00:49:54.418535 8 log.go:172] (0xc002a76320) (1) Data frame handling I0527 00:49:54.418581 8 log.go:172] (0xc002a76320) (1) Data frame sent I0527 00:49:54.418711 8 log.go:172] (0xc002aaa420) (0xc002a76320) Stream removed, broadcasting: 1 I0527 00:49:54.418745 8 log.go:172] (0xc002aaa420) Go away received I0527 00:49:54.418860 8 log.go:172] (0xc002aaa420) (0xc002a76320) Stream removed, broadcasting: 1 I0527 00:49:54.418876 8 log.go:172] (0xc002aaa420) (0xc00200ebe0) Stream removed, broadcasting: 3 I0527 00:49:54.418884 8 log.go:172] (0xc002aaa420) (0xc00200ec80) Stream removed, broadcasting: 5 May 27 00:49:54.418: INFO: Deleting pod dns-7087... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:49:54.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7087" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":206,"skipped":3549,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:49:54.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:49:54.639: INFO: Creating ReplicaSet my-hostname-basic-43b80894-9c8a-47da-b806-8992bd195703 May 27 00:49:54.697: INFO: Pod name my-hostname-basic-43b80894-9c8a-47da-b806-8992bd195703: Found 0 pods out of 1 May 27 00:49:59.710: INFO: Pod name my-hostname-basic-43b80894-9c8a-47da-b806-8992bd195703: Found 1 pods out of 1 May 27 00:49:59.710: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-43b80894-9c8a-47da-b806-8992bd195703" is running May 27 00:49:59.712: INFO: Pod "my-hostname-basic-43b80894-9c8a-47da-b806-8992bd195703-w9622" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-27 00:49:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-27 00:49:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-27 00:49:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-27 00:49:54 +0000 UTC Reason: Message:}]) May 27 00:49:59.713: INFO: Trying to dial the pod May 27 00:50:04.726: INFO: Controller my-hostname-basic-43b80894-9c8a-47da-b806-8992bd195703: Got expected result from replica 1 [my-hostname-basic-43b80894-9c8a-47da-b806-8992bd195703-w9622]: "my-hostname-basic-43b80894-9c8a-47da-b806-8992bd195703-w9622", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:50:04.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5334" for this suite. 
• [SLOW TEST:10.274 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":207,"skipped":3569,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:50:04.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:50:04.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be3b39cb-4041-4e10-99ae-dc5b0c8bb801" in namespace "projected-8389" to be "Succeeded or Failed" May 27 00:50:04.838: INFO: Pod "downwardapi-volume-be3b39cb-4041-4e10-99ae-dc5b0c8bb801": Phase="Pending", Reason="", readiness=false. Elapsed: 12.959722ms May 27 00:50:06.843: INFO: Pod "downwardapi-volume-be3b39cb-4041-4e10-99ae-dc5b0c8bb801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017295343s May 27 00:50:08.847: INFO: Pod "downwardapi-volume-be3b39cb-4041-4e10-99ae-dc5b0c8bb801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021311438s STEP: Saw pod success May 27 00:50:08.847: INFO: Pod "downwardapi-volume-be3b39cb-4041-4e10-99ae-dc5b0c8bb801" satisfied condition "Succeeded or Failed" May 27 00:50:08.849: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-be3b39cb-4041-4e10-99ae-dc5b0c8bb801 container client-container: STEP: delete the pod May 27 00:50:08.966: INFO: Waiting for pod downwardapi-volume-be3b39cb-4041-4e10-99ae-dc5b0c8bb801 to disappear May 27 00:50:08.973: INFO: Pod downwardapi-volume-be3b39cb-4041-4e10-99ae-dc5b0c8bb801 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:50:08.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8389" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":208,"skipped":3584,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:50:08.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1703 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1703 STEP: creating replication controller externalsvc in namespace services-1703 I0527 00:50:09.346126 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1703, replica count: 2 I0527 00:50:12.396544 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 00:50:15.396829 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 27 00:50:15.460: INFO: Creating new exec pod May 27 00:50:19.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1703 execpodtp49w -- /bin/sh -x -c nslookup nodeport-service' May 27 00:50:19.762: INFO: stderr: "I0527 00:50:19.641595 2494 log.go:172] (0xc000a4d760) (0xc0006ef680) Create stream\nI0527 00:50:19.641672 2494 log.go:172] (0xc000a4d760) (0xc0006ef680) Stream added, broadcasting: 1\nI0527 00:50:19.644195 2494 log.go:172] (0xc000a4d760) Reply frame received for 1\nI0527 00:50:19.644257 2494 log.go:172] (0xc000a4d760) (0xc000555d60) Create stream\nI0527 00:50:19.644280 2494 log.go:172] (0xc000a4d760) (0xc000555d60) Stream added, broadcasting: 3\nI0527 00:50:19.645545 2494 log.go:172] (0xc000a4d760) Reply frame received for 3\nI0527 00:50:19.645586 2494 log.go:172] (0xc000a4d760) (0xc000485040) Create stream\nI0527 00:50:19.645602 2494 log.go:172] (0xc000a4d760) (0xc000485040) Stream added, broadcasting: 5\nI0527 00:50:19.647126 2494 log.go:172] (0xc000a4d760) Reply frame received for 5\nI0527 00:50:19.719952 2494 log.go:172] (0xc000a4d760) Data frame received for 5\nI0527 00:50:19.719984 2494 log.go:172] (0xc000485040) (5) Data frame handling\nI0527 00:50:19.720015 2494 log.go:172] (0xc000485040) (5) Data frame sent\n+ nslookup nodeport-service\nI0527 00:50:19.754259 2494 log.go:172] (0xc000a4d760) Data frame received for 3\nI0527 00:50:19.754289 2494 log.go:172] (0xc000555d60) (3) Data frame handling\nI0527 00:50:19.754314 2494 log.go:172] 
(0xc000555d60) (3) Data frame sent\nI0527 00:50:19.755718 2494 log.go:172] (0xc000a4d760) Data frame received for 3\nI0527 00:50:19.755732 2494 log.go:172] (0xc000555d60) (3) Data frame handling\nI0527 00:50:19.755748 2494 log.go:172] (0xc000555d60) (3) Data frame sent\nI0527 00:50:19.756235 2494 log.go:172] (0xc000a4d760) Data frame received for 3\nI0527 00:50:19.756258 2494 log.go:172] (0xc000555d60) (3) Data frame handling\nI0527 00:50:19.756483 2494 log.go:172] (0xc000a4d760) Data frame received for 5\nI0527 00:50:19.756504 2494 log.go:172] (0xc000485040) (5) Data frame handling\nI0527 00:50:19.759148 2494 log.go:172] (0xc000a4d760) Data frame received for 1\nI0527 00:50:19.759167 2494 log.go:172] (0xc0006ef680) (1) Data frame handling\nI0527 00:50:19.759180 2494 log.go:172] (0xc0006ef680) (1) Data frame sent\nI0527 00:50:19.759195 2494 log.go:172] (0xc000a4d760) (0xc0006ef680) Stream removed, broadcasting: 1\nI0527 00:50:19.759499 2494 log.go:172] (0xc000a4d760) (0xc0006ef680) Stream removed, broadcasting: 1\nI0527 00:50:19.759514 2494 log.go:172] (0xc000a4d760) (0xc000555d60) Stream removed, broadcasting: 3\nI0527 00:50:19.759625 2494 log.go:172] (0xc000a4d760) (0xc000485040) Stream removed, broadcasting: 5\n" May 27 00:50:19.762: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1703.svc.cluster.local\tcanonical name = externalsvc.services-1703.svc.cluster.local.\nName:\texternalsvc.services-1703.svc.cluster.local\nAddress: 10.101.148.199\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1703, will wait for the garbage collector to delete the pods May 27 00:50:19.842: INFO: Deleting ReplicationController externalsvc took: 21.554708ms May 27 00:50:19.942: INFO: Terminating ReplicationController externalsvc pods took: 100.259548ms May 27 00:50:35.400: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:50:35.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1703" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:26.466 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":209,"skipped":3600,"failed":0} SSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:50:35.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:50:35.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-8994" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":210,"skipped":3606,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:50:35.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 27 00:50:40.885: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:50:41.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1801" for this suite. 
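The adopt-and-release behaviour above is driven entirely by label selectors and ownerReferences; a hand-run sketch of the release step, using the pod and namespace names from this test (the replacement label value is arbitrary):
  # Changing the matched 'name' label releases the pod from the ReplicaSet...
  kubectl label pod pod-adoption-release --namespace=replicaset-1801 --overwrite name=released
  # ...after which its controller ownerReference should be gone (empty output expected)
  kubectl get pod pod-adoption-release --namespace=replicaset-1801 -o jsonpath='{.metadata.ownerReferences}'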
• [SLOW TEST:5.641 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":211,"skipped":3638,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:50:41.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-dcfa39e8-8953-4865-973e-8f3279dc551c [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:50:41.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3300" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":212,"skipped":3657,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:50:41.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3964 STEP: creating a selector STEP: Creating the service pods in kubernetes May 27 00:50:41.763: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 27 00:50:41.863: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 00:50:43.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 00:50:45.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 00:50:47.947: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:50:49.868: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:50:51.868: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:50:53.868: INFO: The status of Pod netserver-0 is 
Running (Ready = false) May 27 00:50:55.875: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:50:57.867: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:50:59.870: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:51:01.868: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:51:03.867: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 00:51:05.892: INFO: The status of Pod netserver-0 is Running (Ready = true) May 27 00:51:05.897: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 27 00:51:09.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.194:8080/dial?request=hostname&protocol=udp&host=10.244.1.193&port=8081&tries=1'] Namespace:pod-network-test-3964 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:51:09.936: INFO: >>> kubeConfig: /root/.kube/config I0527 00:51:09.986882 8 log.go:172] (0xc002aaa210) (0xc001791720) Create stream I0527 00:51:09.986925 8 log.go:172] (0xc002aaa210) (0xc001791720) Stream added, broadcasting: 1 I0527 00:51:09.988959 8 log.go:172] (0xc002aaa210) Reply frame received for 1 I0527 00:51:09.989004 8 log.go:172] (0xc002aaa210) (0xc0010f7e00) Create stream I0527 00:51:09.989022 8 log.go:172] (0xc002aaa210) (0xc0010f7e00) Stream added, broadcasting: 3 I0527 00:51:09.990365 8 log.go:172] (0xc002aaa210) Reply frame received for 3 I0527 00:51:09.990415 8 log.go:172] (0xc002aaa210) (0xc0005985a0) Create stream I0527 00:51:09.990432 8 log.go:172] (0xc002aaa210) (0xc0005985a0) Stream added, broadcasting: 5 I0527 00:51:09.991522 8 log.go:172] (0xc002aaa210) Reply frame received for 5 I0527 00:51:10.093100 8 log.go:172] (0xc002aaa210) Data frame received for 5 I0527 00:51:10.093404 8 log.go:172] (0xc0005985a0) (5) Data frame handling I0527 00:51:10.093437 8 log.go:172] (0xc002aaa210) Data frame received for 3 I0527 00:51:10.093465 8 log.go:172] (0xc0010f7e00) (3) Data frame handling I0527 00:51:10.093498 8 log.go:172] (0xc0010f7e00) (3) Data frame sent I0527 00:51:10.093521 8 log.go:172] (0xc002aaa210) Data frame received for 3 I0527 00:51:10.093533 8 log.go:172] (0xc0010f7e00) (3) Data frame handling I0527 00:51:10.095333 8 log.go:172] (0xc002aaa210) Data frame received for 1 I0527 00:51:10.095355 8 log.go:172] (0xc001791720) (1) Data frame handling I0527 00:51:10.095376 8 log.go:172] (0xc001791720) (1) Data frame sent I0527 00:51:10.095391 8 log.go:172] (0xc002aaa210) (0xc001791720) Stream removed, broadcasting: 1 I0527 00:51:10.095407 8 log.go:172] (0xc002aaa210) Go away received I0527 00:51:10.095569 8 log.go:172] (0xc002aaa210) (0xc001791720) Stream removed, broadcasting: 1 I0527 00:51:10.095591 8 log.go:172] (0xc002aaa210) (0xc0010f7e00) Stream removed, broadcasting: 3 I0527 00:51:10.095617 8 log.go:172] (0xc002aaa210) (0xc0005985a0) Stream removed, broadcasting: 5 May 27 00:51:10.095: INFO: Waiting for responses: map[] May 27 00:51:10.099: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.194:8080/dial?request=hostname&protocol=udp&host=10.244.2.202&port=8081&tries=1'] Namespace:pod-network-test-3964 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:51:10.099: INFO: >>> kubeConfig: /root/.kube/config I0527 00:51:10.135528 8 log.go:172] (0xc002aaa8f0) (0xc000d9c5a0) Create stream I0527 00:51:10.135552 8 
log.go:172] (0xc002aaa8f0) (0xc000d9c5a0) Stream added, broadcasting: 1 I0527 00:51:10.137690 8 log.go:172] (0xc002aaa8f0) Reply frame received for 1 I0527 00:51:10.137737 8 log.go:172] (0xc002aaa8f0) (0xc001fbe0a0) Create stream I0527 00:51:10.137763 8 log.go:172] (0xc002aaa8f0) (0xc001fbe0a0) Stream added, broadcasting: 3 I0527 00:51:10.138865 8 log.go:172] (0xc002aaa8f0) Reply frame received for 3 I0527 00:51:10.138921 8 log.go:172] (0xc002aaa8f0) (0xc000d9c780) Create stream I0527 00:51:10.138944 8 log.go:172] (0xc002aaa8f0) (0xc000d9c780) Stream added, broadcasting: 5 I0527 00:51:10.139957 8 log.go:172] (0xc002aaa8f0) Reply frame received for 5 I0527 00:51:10.214510 8 log.go:172] (0xc002aaa8f0) Data frame received for 3 I0527 00:51:10.214555 8 log.go:172] (0xc001fbe0a0) (3) Data frame handling I0527 00:51:10.214581 8 log.go:172] (0xc001fbe0a0) (3) Data frame sent I0527 00:51:10.215083 8 log.go:172] (0xc002aaa8f0) Data frame received for 5 I0527 00:51:10.215107 8 log.go:172] (0xc000d9c780) (5) Data frame handling I0527 00:51:10.215228 8 log.go:172] (0xc002aaa8f0) Data frame received for 3 I0527 00:51:10.215255 8 log.go:172] (0xc001fbe0a0) (3) Data frame handling I0527 00:51:10.216546 8 log.go:172] (0xc002aaa8f0) Data frame received for 1 I0527 00:51:10.216579 8 log.go:172] (0xc000d9c5a0) (1) Data frame handling I0527 00:51:10.216605 8 log.go:172] (0xc000d9c5a0) (1) Data frame sent I0527 00:51:10.216642 8 log.go:172] (0xc002aaa8f0) (0xc000d9c5a0) Stream removed, broadcasting: 1 I0527 00:51:10.216665 8 log.go:172] (0xc002aaa8f0) Go away received I0527 00:51:10.216769 8 log.go:172] (0xc002aaa8f0) (0xc000d9c5a0) Stream removed, broadcasting: 1 I0527 00:51:10.216796 8 log.go:172] (0xc002aaa8f0) (0xc001fbe0a0) Stream removed, broadcasting: 3 I0527 00:51:10.216818 8 log.go:172] (0xc002aaa8f0) (0xc000d9c780) Stream removed, broadcasting: 5 May 27 00:51:10.216: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:51:10.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3964" for this suite. 
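The probe above uses agnhost's /dial endpoint: curl runs inside test-container-pod against its own webserver, which then sends a UDP request to the target pod and reports which hostnames answered. Replayed by hand with the pod IPs printed above:
  kubectl exec --namespace=pod-network-test-3964 test-container-pod -- /bin/sh -c \
    "curl -g -q -s 'http://10.244.1.194:8080/dial?request=hostname&protocol=udp&host=10.244.2.202&port=8081&tries=1'"
  # The webserver replies with a JSON list of responders; one entry per try means pod-to-pod UDP traffic works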
• [SLOW TEST:28.513 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3667,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:51:10.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:51:10.980: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:51:12.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137471, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137471, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137471, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137470, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:51:16.028: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:51:16.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8725" for this suite. STEP: Destroying namespace "webhook-8725-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.662 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":214,"skipped":3687,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:51:16.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 27 00:51:17.593: INFO: >>> kubeConfig: /root/.kube/config May 27 00:51:20.552: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:51:32.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2402" for this suite. 
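What this test asserts can be spot-checked against the apiserver's aggregated OpenAPI document; a sketch, assuming the two CRD groups follow the suite's crd-publish-openapi-test-*.example.com naming (the foo group appears verbatim later in this log):
  # Dump the aggregated OpenAPI v2 document and list the CRD-published groups
  kubectl get --raw /openapi/v2 > /tmp/openapi.json
  grep -o 'crd-publish-openapi-test-[a-z]*\.example\.com' /tmp/openapi.json | sort -u   # both groups should be listed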
• [SLOW TEST:15.324 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":215,"skipped":3694,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:51:32.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf in namespace container-probe-4506 May 27 00:51:36.364: INFO: Started pod liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf in namespace container-probe-4506 STEP: checking the pod's current state and verifying that restartCount is present May 27 00:51:36.367: INFO: Initial restart count of pod liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf is 0 May 27 00:51:58.418: INFO: Restart count of pod container-probe-4506/liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf is now 1 (22.051708153s elapsed) May 27 00:52:16.471: INFO: Restart count of pod container-probe-4506/liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf is now 2 (40.104522871s elapsed) May 27 00:52:36.518: INFO: Restart count of pod container-probe-4506/liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf is now 3 (1m0.151502211s elapsed) May 27 00:52:56.565: INFO: Restart count of pod container-probe-4506/liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf is now 4 (1m20.198612486s elapsed) May 27 00:54:06.788: INFO: Restart count of pod container-probe-4506/liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf is now 5 (2m30.42175836s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:54:06.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4506" for this suite. 
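The monotonically increasing restartCount asserted above can be watched directly; a sketch using the pod and namespace names from this test (--watch with a jsonpath template prints the value on every status update):
  kubectl get pod liveness-58888bfd-edfc-471a-8c32-1a8c534e24bf --namespace=container-probe-4506 \
    --watch -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
  # Every failed liveness probe restarts the container, so the printed count only ever rises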
• [SLOW TEST:154.599 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3714,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:54:06.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 27 00:54:11.452: INFO: Successfully updated pod "pod-update-activedeadlineseconds-04a1f671-31bc-491a-b5df-889d881274ef" May 27 00:54:11.452: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-04a1f671-31bc-491a-b5df-889d881274ef" in namespace "pods-4338" to be "terminated due to deadline exceeded" May 27 00:54:11.499: INFO: Pod "pod-update-activedeadlineseconds-04a1f671-31bc-491a-b5df-889d881274ef": Phase="Running", Reason="", readiness=true. Elapsed: 46.950561ms May 27 00:54:13.503: INFO: Pod "pod-update-activedeadlineseconds-04a1f671-31bc-491a-b5df-889d881274ef": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.051231833s May 27 00:54:13.503: INFO: Pod "pod-update-activedeadlineseconds-04a1f671-31bc-491a-b5df-889d881274ef" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:54:13.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4338" for this suite. 
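activeDeadlineSeconds is one of the few pod-spec fields that may be set or shortened on a running pod, which is what the "updating the pod" step above exercises; a hand-run sketch with the names from this test:
  # Tighten the deadline on the running pod; the kubelet fails it once the deadline passes
  kubectl patch pod pod-update-activedeadlineseconds-04a1f671-31bc-491a-b5df-889d881274ef \
    --namespace=pods-4338 --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
  # Expect Failed/DeadlineExceeded, matching the status logged above
  kubectl get pod pod-update-activedeadlineseconds-04a1f671-31bc-491a-b5df-889d881274ef \
    --namespace=pods-4338 -o jsonpath='{.status.phase}/{.status.reason}'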
• [SLOW TEST:6.699 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":217,"skipped":3730,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:54:13.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 27 00:54:13.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-5986 -- logs-generator --log-lines-total 100 --run-duration 20s' May 27 00:54:13.723: INFO: stderr: "" May 27 00:54:13.723: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 27 00:54:13.723: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 27 00:54:13.723: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5986" to be "running and ready, or succeeded" May 27 00:54:13.736: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 13.164425ms May 27 00:54:15.859: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13615841s May 27 00:54:17.863: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.140228841s May 27 00:54:17.864: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 27 00:54:17.864: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for a matching strings May 27 00:54:17.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5986' May 27 00:54:17.988: INFO: stderr: "" May 27 00:54:17.988: INFO: stdout: "I0527 00:54:16.090428 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/dgb 582\nI0527 00:54:16.290563 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/4fj 585\nI0527 00:54:16.490697 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/l9z8 391\nI0527 00:54:16.690683 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/xb9j 266\nI0527 00:54:16.890635 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/9ps 464\nI0527 00:54:17.090639 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/vwj 292\nI0527 00:54:17.290589 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/n47h 249\nI0527 00:54:17.490599 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/9kgl 526\nI0527 00:54:17.690602 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/kmrk 367\nI0527 00:54:17.890629 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/kct 454\n" STEP: limiting log lines May 27 00:54:17.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5986 --tail=1' May 27 00:54:18.095: INFO: stderr: "" May 27 00:54:18.095: INFO: stdout: "I0527 00:54:18.090606 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/6ss 311\n" May 27 00:54:18.095: INFO: got output "I0527 00:54:18.090606 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/6ss 311\n" STEP: limiting log bytes May 27 00:54:18.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5986 --limit-bytes=1' May 27 00:54:18.197: INFO: stderr: "" May 27 00:54:18.197: INFO: stdout: "I" May 27 00:54:18.197: INFO: got output "I" STEP: exposing timestamps May 27 00:54:18.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5986 --tail=1 --timestamps' May 27 00:54:18.306: INFO: stderr: "" May 27 00:54:18.306: INFO: stdout: "2020-05-27T00:54:18.290819265Z I0527 00:54:18.290651 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/wcn 348\n" May 27 00:54:18.306: INFO: got output "2020-05-27T00:54:18.290819265Z I0527 00:54:18.290651 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/wcn 348\n" STEP: restricting to a time range May 27 00:54:20.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5986 --since=1s' May 27 00:54:20.930: INFO: stderr: "" May 27 00:54:20.930: INFO: stdout: "I0527 00:54:20.090617 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/vcx 232\nI0527 00:54:20.290633 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/ln9f 216\nI0527 00:54:20.490636 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/d85 315\nI0527 00:54:20.690655 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/6lg 447\nI0527 00:54:20.890695 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/jm22 306\n" May 27 00:54:20.930: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5986 --since=24h' May 27 00:54:21.062: INFO: stderr: "" May 27 00:54:21.062: INFO: stdout: "I0527 00:54:16.090428 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/dgb 582\nI0527 00:54:16.290563 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/4fj 585\nI0527 00:54:16.490697 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/l9z8 391\nI0527 00:54:16.690683 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/xb9j 266\nI0527 00:54:16.890635 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/9ps 464\nI0527 00:54:17.090639 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/vwj 292\nI0527 00:54:17.290589 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/n47h 249\nI0527 00:54:17.490599 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/9kgl 526\nI0527 00:54:17.690602 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/kmrk 367\nI0527 00:54:17.890629 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/kct 454\nI0527 00:54:18.090606 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/6ss 311\nI0527 00:54:18.290651 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/wcn 348\nI0527 00:54:18.490658 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/cjf 386\nI0527 00:54:18.690629 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/4k8 344\nI0527 00:54:18.890623 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/fc76 271\nI0527 00:54:19.090663 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/wtj 467\nI0527 00:54:19.290610 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/qxp 590\nI0527 00:54:19.490680 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/78q5 262\nI0527 00:54:19.690596 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/njx 543\nI0527 00:54:19.890640 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/rdr 215\nI0527 00:54:20.090617 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/vcx 232\nI0527 00:54:20.290633 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/ln9f 216\nI0527 00:54:20.490636 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/d85 315\nI0527 00:54:20.690655 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/6lg 447\nI0527 00:54:20.890695 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/jm22 306\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 27 00:54:21.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5986' May 27 00:54:34.876: INFO: stderr: "" May 27 00:54:34.876: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:54:34.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5986" for this suite. 
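The filter flags exercised above, condensed into a replayable form (verbatim from the commands the test ran, minus the --server/--kubeconfig plumbing):
  kubectl logs logs-generator logs-generator --namespace=kubectl-5986 --tail=1              # last line only
  kubectl logs logs-generator logs-generator --namespace=kubectl-5986 --limit-bytes=1       # first byte only
  kubectl logs logs-generator logs-generator --namespace=kubectl-5986 --tail=1 --timestamps # prefix RFC3339 timestamps
  kubectl logs logs-generator logs-generator --namespace=kubectl-5986 --since=1s            # entries from the last second only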
• [SLOW TEST:21.385 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":218,"skipped":3747,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:54:34.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:54:35.012: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Pending, waiting for it to be Running (with Ready = true) May 27 00:54:37.043: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Pending, waiting for it to be Running (with Ready = true) May 27 00:54:39.017: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:41.016: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:43.017: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:45.017: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:47.019: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:49.017: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:51.017: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:53.017: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:55.015: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = false) May 27 00:54:57.017: INFO: The status of Pod test-webserver-b0a3a65e-8c65-41c4-9ac3-2871cd934a2e is Running (Ready = true) May 27 00:54:57.020: INFO: Container started at 2020-05-27 00:54:37 +0000 UTC, pod became ready at 2020-05-27 00:54:55 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:54:57.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-2895" for this suite. • [SLOW TEST:22.132 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":219,"skipped":3748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:54:57.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:54:57.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2315" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":220,"skipped":3811,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:54:57.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:54:57.377: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 27 00:54:59.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5839 create -f -' May 27 00:55:03.334: INFO: stderr: "" May 27 00:55:03.334: INFO: stdout: "e2e-test-crd-publish-openapi-2667-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 27 00:55:03.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5839 delete e2e-test-crd-publish-openapi-2667-crds test-foo' May 27 00:55:03.672: INFO: stderr: "" May 27 00:55:03.672: INFO: stdout: "e2e-test-crd-publish-openapi-2667-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" 
deleted\n" May 27 00:55:03.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5839 apply -f -' May 27 00:55:04.499: INFO: stderr: "" May 27 00:55:04.499: INFO: stdout: "e2e-test-crd-publish-openapi-2667-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 27 00:55:04.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5839 delete e2e-test-crd-publish-openapi-2667-crds test-foo' May 27 00:55:04.649: INFO: stderr: "" May 27 00:55:04.649: INFO: stdout: "e2e-test-crd-publish-openapi-2667-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 27 00:55:04.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5839 create -f -' May 27 00:55:04.936: INFO: rc: 1 May 27 00:55:04.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5839 apply -f -' May 27 00:55:05.167: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 27 00:55:05.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5839 create -f -' May 27 00:55:05.500: INFO: rc: 1 May 27 00:55:05.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5839 apply -f -' May 27 00:55:05.736: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 27 00:55:05.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2667-crds' May 27 00:55:06.019: INFO: stderr: "" May 27 00:55:06.019: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2667-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 27 00:55:06.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2667-crds.metadata' May 27 00:55:06.248: INFO: stderr: "" May 27 00:55:06.249: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2667-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 27 00:55:06.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2667-crds.spec' May 27 00:55:06.510: INFO: stderr: "" May 27 00:55:06.510: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2667-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 27 00:55:06.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2667-crds.spec.bars' May 27 00:55:06.786: INFO: stderr: "" May 27 00:55:06.786: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2667-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 27 00:55:06.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2667-crds.spec.bars2' May 27 00:55:07.045: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:09.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5839" for this suite. 
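A condensed replay of the schema checks above; the commands are taken directly from this test, and the final call is expected to fail because bars2 does not exist in the published validation schema:
  kubectl explain e2e-test-crd-publish-openapi-2667-crds.spec          # top-level spec from the CRD's schema
  kubectl explain e2e-test-crd-publish-openapi-2667-crds.spec.bars     # drills into the bars array items
  kubectl explain e2e-test-crd-publish-openapi-2667-crds.spec.bars2    # unknown property: exits non-zero (rc: 1 above)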
• [SLOW TEST:12.708 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":221,"skipped":3826,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:09.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 27 00:55:16.078: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8841 PodName:pod-sharedvolume-a8752743-0e08-4c2e-93dd-d8d9f4d99428 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 00:55:16.078: INFO: >>> kubeConfig: /root/.kube/config I0527 00:55:16.116320 8 log.go:172] (0xc0025740b0) (0xc0012a5f40) Create stream I0527 00:55:16.116364 8 log.go:172] (0xc0025740b0) (0xc0012a5f40) Stream added, broadcasting: 1 I0527 00:55:16.118725 8 log.go:172] (0xc0025740b0) Reply frame received for 1 I0527 00:55:16.118778 8 log.go:172] (0xc0025740b0) (0xc001060000) Create stream I0527 00:55:16.118793 8 log.go:172] (0xc0025740b0) (0xc001060000) Stream added, broadcasting: 3 I0527 00:55:16.119967 8 log.go:172] (0xc0025740b0) Reply frame received for 3 I0527 00:55:16.120006 8 log.go:172] (0xc0025740b0) (0xc000316500) Create stream I0527 00:55:16.120022 8 log.go:172] (0xc0025740b0) (0xc000316500) Stream added, broadcasting: 5 I0527 00:55:16.121076 8 log.go:172] (0xc0025740b0) Reply frame received for 5 I0527 00:55:16.222010 8 log.go:172] (0xc0025740b0) Data frame received for 5 I0527 00:55:16.222077 8 log.go:172] (0xc000316500) (5) Data frame handling I0527 00:55:16.222117 8 log.go:172] (0xc0025740b0) Data frame received for 3 I0527 00:55:16.222145 8 log.go:172] (0xc001060000) (3) Data frame handling I0527 00:55:16.222176 8 log.go:172] (0xc001060000) (3) Data frame sent I0527 00:55:16.222197 8 log.go:172] (0xc0025740b0) Data frame received for 3 I0527 00:55:16.222212 8 log.go:172] (0xc001060000) (3) Data frame handling I0527 00:55:16.223985 8 log.go:172] (0xc0025740b0) Data frame received for 1 I0527 00:55:16.224061 8 log.go:172] (0xc0012a5f40) (1) Data frame handling I0527 00:55:16.224146 8 log.go:172] (0xc0012a5f40) (1) Data frame sent I0527 00:55:16.224182 8 log.go:172] (0xc0025740b0) (0xc0012a5f40) Stream removed, broadcasting: 1 I0527 00:55:16.224226 8 log.go:172] (0xc0025740b0) Go away received I0527 00:55:16.224322 8 
log.go:172] (0xc0025740b0) (0xc0012a5f40) Stream removed, broadcasting: 1 I0527 00:55:16.224342 8 log.go:172] (0xc0025740b0) (0xc001060000) Stream removed, broadcasting: 3 I0527 00:55:16.224349 8 log.go:172] (0xc0025740b0) (0xc000316500) Stream removed, broadcasting: 5 May 27 00:55:16.224: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:16.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8841" for this suite. • [SLOW TEST:6.261 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":222,"skipped":3832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:16.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-2e30a52c-772f-47a4-94e1-3897cb0ffd2a STEP: Creating a pod to test consume secrets May 27 00:55:16.379: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-614a6e97-8fe7-4daa-a4fa-8af84a94e426" in namespace "projected-206" to be "Succeeded or Failed" May 27 00:55:16.410: INFO: Pod "pod-projected-secrets-614a6e97-8fe7-4daa-a4fa-8af84a94e426": Phase="Pending", Reason="", readiness=false. Elapsed: 30.978603ms May 27 00:55:18.413: INFO: Pod "pod-projected-secrets-614a6e97-8fe7-4daa-a4fa-8af84a94e426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034649123s May 27 00:55:20.416: INFO: Pod "pod-projected-secrets-614a6e97-8fe7-4daa-a4fa-8af84a94e426": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037589838s STEP: Saw pod success May 27 00:55:20.416: INFO: Pod "pod-projected-secrets-614a6e97-8fe7-4daa-a4fa-8af84a94e426" satisfied condition "Succeeded or Failed" May 27 00:55:20.420: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-614a6e97-8fe7-4daa-a4fa-8af84a94e426 container projected-secret-volume-test: STEP: delete the pod May 27 00:55:20.507: INFO: Waiting for pod pod-projected-secrets-614a6e97-8fe7-4daa-a4fa-8af84a94e426 to disappear May 27 00:55:20.770: INFO: Pod pod-projected-secrets-614a6e97-8fe7-4daa-a4fa-8af84a94e426 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:20.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-206" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":223,"skipped":3859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:20.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 00:55:21.733: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 00:55:23.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137721, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137721, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137721, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726137721, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 00:55:26.785: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 
00:55:26.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1254-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:27.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-286" for this suite. STEP: Destroying namespace "webhook-286-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.253 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":224,"skipped":3897,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:28.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 27 00:55:36.127: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 27 00:55:36.151: INFO: Pod pod-with-poststart-exec-hook still exists May 27 00:55:38.151: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 27 00:55:38.156: INFO: Pod pod-with-poststart-exec-hook still exists May 27 00:55:40.151: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 27 00:55:40.155: INFO: Pod pod-with-poststart-exec-hook still exists May 27 00:55:42.151: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 27 00:55:42.156: INFO: Pod pod-with-poststart-exec-hook still exists May 27 00:55:44.152: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 27 00:55:44.156: INFO: Pod pod-with-poststart-exec-hook still exists May 27 00:55:46.151: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 27 00:55:46.155: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:46.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1136" for this suite. • [SLOW TEST:18.130 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3917,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:46.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:55:46.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2" in namespace "projected-8798" to be "Succeeded or Failed" May 27 00:55:46.333: INFO: Pod 
"downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.524183ms May 27 00:55:48.337: INFO: Pod "downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044218318s May 27 00:55:50.342: INFO: Pod "downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2": Phase="Running", Reason="", readiness=true. Elapsed: 4.048970663s May 27 00:55:52.346: INFO: Pod "downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052228672s STEP: Saw pod success May 27 00:55:52.346: INFO: Pod "downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2" satisfied condition "Succeeded or Failed" May 27 00:55:52.423: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2 container client-container: STEP: delete the pod May 27 00:55:52.467: INFO: Waiting for pod downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2 to disappear May 27 00:55:52.474: INFO: Pod downwardapi-volume-d02b4e7d-38b8-4c7e-9ba9-aab13aa490d2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:52.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8798" for this suite. • [SLOW TEST:6.317 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":226,"skipped":3923,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:52.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:55:52.579: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:53.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1485" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":227,"skipped":3923,"failed":0} ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:53.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 00:55:53.722: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b306cbd1-e000-4b21-98c8-934313eb8b9e" in namespace "security-context-test-4990" to be "Succeeded or Failed" May 27 00:55:53.752: INFO: Pod "busybox-readonly-false-b306cbd1-e000-4b21-98c8-934313eb8b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.419545ms May 27 00:55:55.885: INFO: Pod "busybox-readonly-false-b306cbd1-e000-4b21-98c8-934313eb8b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162685707s May 27 00:55:57.888: INFO: Pod "busybox-readonly-false-b306cbd1-e000-4b21-98c8-934313eb8b9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165941964s May 27 00:55:57.888: INFO: Pod "busybox-readonly-false-b306cbd1-e000-4b21-98c8-934313eb8b9e" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:57.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4990" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3923,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:58.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:55:58.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-4572" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":229,"skipped":3938,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:55:58.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:55:58.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061" in namespace "projected-581" to be "Succeeded or Failed" May 27 00:55:58.202: INFO: Pod "downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061": Phase="Pending", Reason="", readiness=false. Elapsed: 8.632674ms May 27 00:56:02.178: INFO: Pod "downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061": Phase="Pending", Reason="", readiness=false. Elapsed: 3.984519973s May 27 00:56:04.210: INFO: Pod "downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.016538753s May 27 00:56:06.213: INFO: Pod "downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020021376s STEP: Saw pod success May 27 00:56:06.213: INFO: Pod "downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061" satisfied condition "Succeeded or Failed" May 27 00:56:06.216: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061 container client-container: STEP: delete the pod May 27 00:56:06.329: INFO: Waiting for pod downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061 to disappear May 27 00:56:06.334: INFO: Pod downwardapi-volume-0d6fc7a5-76cc-4235-89c3-92851830f061 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:56:06.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-581" for this suite. • [SLOW TEST:8.205 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":230,"skipped":3945,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:56:06.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 00:56:06.462: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b81dffc-698b-4087-89a9-ab7448bd0ee8" in namespace "downward-api-8569" to be "Succeeded or Failed" May 27 00:56:06.482: INFO: Pod "downwardapi-volume-9b81dffc-698b-4087-89a9-ab7448bd0ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.949162ms May 27 00:56:08.486: INFO: Pod "downwardapi-volume-9b81dffc-698b-4087-89a9-ab7448bd0ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023890625s May 27 00:56:10.491: INFO: Pod "downwardapi-volume-9b81dffc-698b-4087-89a9-ab7448bd0ee8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028868843s STEP: Saw pod success May 27 00:56:10.491: INFO: Pod "downwardapi-volume-9b81dffc-698b-4087-89a9-ab7448bd0ee8" satisfied condition "Succeeded or Failed" May 27 00:56:10.494: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9b81dffc-698b-4087-89a9-ab7448bd0ee8 container client-container: STEP: delete the pod May 27 00:56:10.547: INFO: Waiting for pod downwardapi-volume-9b81dffc-698b-4087-89a9-ab7448bd0ee8 to disappear May 27 00:56:10.564: INFO: Pod downwardapi-volume-9b81dffc-698b-4087-89a9-ab7448bd0ee8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:56:10.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8569" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3966,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:56:10.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2b8f8e21-1379-4bdc-a406-567af6f14715 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2b8f8e21-1379-4bdc-a406-567af6f14715 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:57:41.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1633" for this suite. 
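The 90-second duration above is mostly waiting for update propagation: after a ConfigMap changes, files in a projected volume catch up on the kubelet's next periodic sync rather than instantly. A sketch of the same loop by hand (names illustrative):

    $ kubectl create configmap demo-cm --from-literal=data-1=value-1
    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo
    spec:
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "while true; do cat /projected/data-1; echo; sleep 5; done"]
        volumeMounts: [{name: cm, mountPath: /projected}]
      volumes:
      - name: cm
        projected:
          sources:
          - configMap: {name: demo-cm}
    EOF
    $ kubectl create configmap demo-cm --from-literal=data-1=value-2 \
        --dry-run=client -o yaml | kubectl apply -f -
    $ kubectl logs -f projected-cm-demo   # value-2 shows up after the kubelet resync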
• [SLOW TEST:90.605 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3969,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:57:41.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 27 00:57:41.886: INFO: Pod name wrapped-volume-race-84e1c8ed-f12d-474f-b3bb-26f9cdc424a2: Found 0 pods out of 5 May 27 00:57:47.298: INFO: Pod name wrapped-volume-race-84e1c8ed-f12d-474f-b3bb-26f9cdc424a2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-84e1c8ed-f12d-474f-b3bb-26f9cdc424a2 in namespace emptydir-wrapper-7080, will wait for the garbage collector to delete the pods May 27 00:57:59.489: INFO: Deleting ReplicationController wrapped-volume-race-84e1c8ed-f12d-474f-b3bb-26f9cdc424a2 took: 8.239838ms May 27 00:57:59.789: INFO: Terminating ReplicationController wrapped-volume-race-84e1c8ed-f12d-474f-b3bb-26f9cdc424a2 pods took: 300.433277ms STEP: Creating RC which spawns configmap-volume pods May 27 00:58:15.556: INFO: Pod name wrapped-volume-race-f84a0c2f-8c4c-4cdf-acec-885823878b2b: Found 0 pods out of 5 May 27 00:58:20.565: INFO: Pod name wrapped-volume-race-f84a0c2f-8c4c-4cdf-acec-885823878b2b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f84a0c2f-8c4c-4cdf-acec-885823878b2b in namespace emptydir-wrapper-7080, will wait for the garbage collector to delete the pods May 27 00:58:36.758: INFO: Deleting ReplicationController wrapped-volume-race-f84a0c2f-8c4c-4cdf-acec-885823878b2b took: 7.986467ms May 27 00:58:37.159: INFO: Terminating ReplicationController wrapped-volume-race-f84a0c2f-8c4c-4cdf-acec-885823878b2b pods took: 400.26988ms STEP: Creating RC which spawns configmap-volume pods May 27 00:58:45.117: INFO: Pod name wrapped-volume-race-25ae9cc5-21e9-4335-8e4a-62b77e087d44: Found 0 pods out of 5 May 27 00:58:50.126: INFO: Pod name wrapped-volume-race-25ae9cc5-21e9-4335-8e4a-62b77e087d44: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-25ae9cc5-21e9-4335-8e4a-62b77e087d44 in namespace emptydir-wrapper-7080, will wait for the garbage collector to delete the pods May 27 00:59:04.243: INFO: Deleting ReplicationController 
wrapped-volume-race-25ae9cc5-21e9-4335-8e4a-62b77e087d44 took: 29.444722ms May 27 00:59:04.643: INFO: Terminating ReplicationController wrapped-volume-race-25ae9cc5-21e9-4335-8e4a-62b77e087d44 pods took: 400.275449ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 00:59:16.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7080" for this suite. • [SLOW TEST:95.013 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":233,"skipped":3976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 00:59:16.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:00:16.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5802" for this suite. 
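The probe spec above encodes an important asymmetry: a failing readiness probe only keeps the pod out of Ready (and out of Service endpoints); it never restarts the container, whereas a failing liveness probe would. A minimal sketch of a never-ready pod (the name and the always-failing check are illustrative):

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        readinessProbe:
          exec: {command: ["false"]}   # always fails
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    $ kubectl get pod never-ready   # stays READY 0/1 with RESTARTS 0, matching the assertion above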
• [SLOW TEST:60.116 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":234,"skipped":4005,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:00:16.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 01:00:16.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e86c933-2f2f-4a9d-9ca7-653981f8bcd5" in namespace "projected-3964" to be "Succeeded or Failed" May 27 01:00:16.484: INFO: Pod "downwardapi-volume-3e86c933-2f2f-4a9d-9ca7-653981f8bcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571681ms May 27 01:00:18.587: INFO: Pod "downwardapi-volume-3e86c933-2f2f-4a9d-9ca7-653981f8bcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107743024s May 27 01:00:20.591: INFO: Pod "downwardapi-volume-3e86c933-2f2f-4a9d-9ca7-653981f8bcd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111592604s STEP: Saw pod success May 27 01:00:20.591: INFO: Pod "downwardapi-volume-3e86c933-2f2f-4a9d-9ca7-653981f8bcd5" satisfied condition "Succeeded or Failed" May 27 01:00:20.593: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3e86c933-2f2f-4a9d-9ca7-653981f8bcd5 container client-container: STEP: delete the pod May 27 01:00:20.668: INFO: Waiting for pod downwardapi-volume-3e86c933-2f2f-4a9d-9ca7-653981f8bcd5 to disappear May 27 01:00:20.682: INFO: Pod downwardapi-volume-3e86c933-2f2f-4a9d-9ca7-653981f8bcd5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:00:20.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3964" for this suite. 
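The projected downwardAPI source used above exposes pod metadata as files inside the container. A minimal sketch of a pod that prints its own name the same way (all names illustrative):

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: podinfo-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/podname"]
        volumeMounts: [{name: podinfo, mountPath: /etc/podinfo}]
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef: {fieldPath: metadata.name}
    EOF
    $ kubectl logs podinfo-demo   # prints: podinfo-demo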
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":235,"skipped":4013,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:00:20.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 01:00:20.764: INFO: Waiting up to 5m0s for pod "busybox-user-65534-81bb41bf-8d9b-4d7b-abbd-7fa2faa48090" in namespace "security-context-test-2633" to be "Succeeded or Failed" May 27 01:00:20.782: INFO: Pod "busybox-user-65534-81bb41bf-8d9b-4d7b-abbd-7fa2faa48090": Phase="Pending", Reason="", readiness=false. Elapsed: 17.630966ms May 27 01:00:22.786: INFO: Pod "busybox-user-65534-81bb41bf-8d9b-4d7b-abbd-7fa2faa48090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021276273s May 27 01:00:24.790: INFO: Pod "busybox-user-65534-81bb41bf-8d9b-4d7b-abbd-7fa2faa48090": Phase="Running", Reason="", readiness=true. Elapsed: 4.025598094s May 27 01:00:26.794: INFO: Pod "busybox-user-65534-81bb41bf-8d9b-4d7b-abbd-7fa2faa48090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02934907s May 27 01:00:26.794: INFO: Pod "busybox-user-65534-81bb41bf-8d9b-4d7b-abbd-7fa2faa48090" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:00:26.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2633" for this suite. 
• [SLOW TEST:6.119 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":4014,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:00:26.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 27 01:00:26.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ed9fef3-93d5-47a7-9a2f-71c914cafabf" in namespace "downward-api-9213" to be "Succeeded or Failed" May 27 01:00:26.899: INFO: Pod "downwardapi-volume-9ed9fef3-93d5-47a7-9a2f-71c914cafabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.626178ms May 27 01:00:28.952: INFO: Pod "downwardapi-volume-9ed9fef3-93d5-47a7-9a2f-71c914cafabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056154476s May 27 01:00:30.957: INFO: Pod "downwardapi-volume-9ed9fef3-93d5-47a7-9a2f-71c914cafabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060852368s STEP: Saw pod success May 27 01:00:30.957: INFO: Pod "downwardapi-volume-9ed9fef3-93d5-47a7-9a2f-71c914cafabf" satisfied condition "Succeeded or Failed" May 27 01:00:30.960: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9ed9fef3-93d5-47a7-9a2f-71c914cafabf container client-container: STEP: delete the pod May 27 01:00:30.997: INFO: Waiting for pod downwardapi-volume-9ed9fef3-93d5-47a7-9a2f-71c914cafabf to disappear May 27 01:00:30.999: INFO: Pod downwardapi-volume-9ed9fef3-93d5-47a7-9a2f-71c914cafabf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:00:30.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9213" for this suite. 
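defaultMode on a downwardAPI volume sets the permission bits for every generated file. A sketch that makes the labels file owner-read-only (names illustrative; the kubelet materializes the files behind a symlinked ..data directory, hence ls -lL to dereference):

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: mode-demo
      labels: {app: mode-demo}
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -lL /etc/podinfo/labels"]
        volumeMounts: [{name: podinfo, mountPath: /etc/podinfo}]
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 0400
          items:
          - path: labels
            fieldRef: {fieldPath: metadata.labels}
    EOF
    $ kubectl logs mode-demo   # mode column should read -r-------- (0400)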
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":4017,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:00:31.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 27 01:00:31.159: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7999 /api/v1/namespaces/watch-7999/configmaps/e2e-watch-test-resource-version db2ad7bf-5526-42ec-81f9-62650877d858 7960418 0 2020-05-27 01:00:31 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-27 01:00:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 27 01:00:31.159: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7999 /api/v1/namespaces/watch-7999/configmaps/e2e-watch-test-resource-version db2ad7bf-5526-42ec-81f9-62650877d858 7960419 0 2020-05-27 01:00:31 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-27 01:00:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:00:31.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7999" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":238,"skipped":4020,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:00:31.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-2f4e01c7-d3aa-4b49-91c3-ddd93c22b6c4 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-2f4e01c7-d3aa-4b49-91c3-ddd93c22b6c4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:01:53.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4907" for this suite. • [SLOW TEST:82.558 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":4031,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:01:53.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 27 01:01:58.045: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:01:58.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1585" for 
this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":4043,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:01:58.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:02:14.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8260" for this suite. • [SLOW TEST:16.811 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":288,"completed":241,"skipped":4066,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:02:14.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 27 01:02:15.012: INFO: Waiting up to 5m0s for pod "pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867" in namespace "emptydir-3232" to be "Succeeded or Failed" May 27 01:02:15.015: INFO: Pod "pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867": Phase="Pending", Reason="", readiness=false. Elapsed: 2.448506ms May 27 01:02:17.019: INFO: Pod "pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006444323s May 27 01:02:19.326: INFO: Pod "pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313612076s May 27 01:02:21.330: INFO: Pod "pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.318170972s STEP: Saw pod success May 27 01:02:21.331: INFO: Pod "pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867" satisfied condition "Succeeded or Failed" May 27 01:02:21.334: INFO: Trying to get logs from node latest-worker2 pod pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867 container test-container: STEP: delete the pod May 27 01:02:21.410: INFO: Waiting for pod pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867 to disappear May 27 01:02:21.428: INFO: Pod pod-23a7f5dc-298d-4cbe-aaf1-676cc13f1867 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:02:21.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3232" for this suite. 
• [SLOW TEST:6.521 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":242,"skipped":4071,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:02:21.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 27 01:02:21.528: INFO: Waiting up to 5m0s for pod "var-expansion-e2741e5e-7425-425e-84da-a3492e10c12a" in namespace "var-expansion-8369" to be "Succeeded or Failed" May 27 01:02:21.546: INFO: Pod "var-expansion-e2741e5e-7425-425e-84da-a3492e10c12a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.79138ms May 27 01:02:23.567: INFO: Pod "var-expansion-e2741e5e-7425-425e-84da-a3492e10c12a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038976298s May 27 01:02:25.584: INFO: Pod "var-expansion-e2741e5e-7425-425e-84da-a3492e10c12a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056086845s STEP: Saw pod success May 27 01:02:25.584: INFO: Pod "var-expansion-e2741e5e-7425-425e-84da-a3492e10c12a" satisfied condition "Succeeded or Failed" May 27 01:02:25.587: INFO: Trying to get logs from node latest-worker pod var-expansion-e2741e5e-7425-425e-84da-a3492e10c12a container dapi-container: STEP: delete the pod May 27 01:02:25.649: INFO: Waiting for pod var-expansion-e2741e5e-7425-425e-84da-a3492e10c12a to disappear May 27 01:02:25.685: INFO: Pod var-expansion-e2741e5e-7425-425e-84da-a3492e10c12a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:02:25.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8369" for this suite. 
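Volume subpaths can expand variables that come from the downward API via subPathExpr (plain subPath is never expanded). A sketch where the pod mounts a subdirectory named after itself (names illustrative):

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "touch /volume_mount/marker && ls /volume_mount"]
        env:
        - name: POD_NAME
          valueFrom: {fieldRef: {fieldPath: metadata.name}}
        volumeMounts:
        - name: workdir
          mountPath: /volume_mount
          subPathExpr: $(POD_NAME)   # mounts the emptyDir's subpath-demo/ subdirectory
      volumes:
      - name: workdir
        emptyDir: {}
    EOF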
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":243,"skipped":4078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:02:25.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 01:02:25.783: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 27 01:02:25.795: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:25.811: INFO: Number of nodes with available pods: 0 May 27 01:02:25.811: INFO: Node latest-worker is running more than one daemon pod May 27 01:02:26.816: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:26.820: INFO: Number of nodes with available pods: 0 May 27 01:02:26.820: INFO: Node latest-worker is running more than one daemon pod May 27 01:02:27.851: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:28.012: INFO: Number of nodes with available pods: 0 May 27 01:02:28.012: INFO: Node latest-worker is running more than one daemon pod May 27 01:02:28.948: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:28.952: INFO: Number of nodes with available pods: 0 May 27 01:02:28.952: INFO: Node latest-worker is running more than one daemon pod May 27 01:02:29.820: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:29.823: INFO: Number of nodes with available pods: 0 May 27 01:02:29.823: INFO: Node latest-worker is running more than one daemon pod May 27 01:02:30.815: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:30.817: INFO: Number of nodes with available pods: 2 May 27 01:02:30.818: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 27 01:02:30.968: INFO: Wrong image for pod: daemon-set-9gxm7. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:30.968: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:30.974: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:32.002: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:32.002: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:32.007: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:32.989: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:32.989: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:32.993: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:33.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:33.980: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:33.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:34.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:34.980: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:34.980: INFO: Pod daemon-set-h764x is not available May 27 01:02:34.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:35.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:35.979: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:35.979: INFO: Pod daemon-set-h764x is not available May 27 01:02:35.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:36.979: INFO: Wrong image for pod: daemon-set-9gxm7. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:36.979: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:36.979: INFO: Pod daemon-set-h764x is not available May 27 01:02:36.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:37.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:37.980: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:37.980: INFO: Pod daemon-set-h764x is not available May 27 01:02:38.002: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:38.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:38.980: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:38.980: INFO: Pod daemon-set-h764x is not available May 27 01:02:38.985: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:39.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:39.980: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:39.980: INFO: Pod daemon-set-h764x is not available May 27 01:02:39.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:40.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:40.979: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:40.979: INFO: Pod daemon-set-h764x is not available May 27 01:02:40.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:41.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:41.980: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 27 01:02:41.980: INFO: Pod daemon-set-h764x is not available May 27 01:02:41.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:42.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:42.980: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:42.980: INFO: Pod daemon-set-h764x is not available May 27 01:02:42.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:43.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:43.979: INFO: Wrong image for pod: daemon-set-h764x. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:43.979: INFO: Pod daemon-set-h764x is not available May 27 01:02:43.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:44.993: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:44.993: INFO: Pod daemon-set-s9fz6 is not available May 27 01:02:44.997: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:45.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:45.980: INFO: Pod daemon-set-s9fz6 is not available May 27 01:02:45.985: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:46.978: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:46.978: INFO: Pod daemon-set-s9fz6 is not available May 27 01:02:46.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:47.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:47.980: INFO: Pod daemon-set-s9fz6 is not available May 27 01:02:47.985: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:48.980: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 27 01:02:48.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:49.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:49.979: INFO: Pod daemon-set-9gxm7 is not available May 27 01:02:49.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:50.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:50.979: INFO: Pod daemon-set-9gxm7 is not available May 27 01:02:50.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:51.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:51.979: INFO: Pod daemon-set-9gxm7 is not available May 27 01:02:51.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:53.002: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:53.002: INFO: Pod daemon-set-9gxm7 is not available May 27 01:02:53.006: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:53.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:53.979: INFO: Pod daemon-set-9gxm7 is not available May 27 01:02:53.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:54.979: INFO: Wrong image for pod: daemon-set-9gxm7. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 27 01:02:54.979: INFO: Pod daemon-set-9gxm7 is not available May 27 01:02:54.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:55.980: INFO: Pod daemon-set-rgvx8 is not available May 27 01:02:55.985: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 27 01:02:55.989: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:55.992: INFO: Number of nodes with available pods: 1 May 27 01:02:55.992: INFO: Node latest-worker2 is running more than one daemon pod May 27 01:02:56.998: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:57.002: INFO: Number of nodes with available pods: 1 May 27 01:02:57.002: INFO: Node latest-worker2 is running more than one daemon pod May 27 01:02:57.998: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:02:58.001: INFO: Number of nodes with available pods: 2 May 27 01:02:58.001: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7628, will wait for the garbage collector to delete the pods May 27 01:02:58.105: INFO: Deleting DaemonSet.extensions daemon-set took: 6.701303ms May 27 01:02:58.506: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.373011ms May 27 01:03:02.009: INFO: Number of nodes with available pods: 0 May 27 01:03:02.009: INFO: Number of running nodes: 0, number of available pods: 0 May 27 01:03:02.012: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7628/daemonsets","resourceVersion":"7961102"},"items":null} May 27 01:03:02.015: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7628/pods","resourceVersion":"7961102"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:03:02.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7628" for this suite. 
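The long polling loop above ("Wrong image for pod ... Expected: agnhost:2.13, got: httpd:2.4.38-alpine") is the RollingUpdate machinery replacing DaemonSet pods one node at a time. A sketch of a DaemonSet configured that way, under assumed names and labels:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	maxUnavailable := intstr.FromInt(1) // at most one node's pod is down at a time
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type:          appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &maxUnavailable},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine", // the "before" image in the log
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

Updating the template image (for example "kubectl set image daemonset/daemon-set app=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13") triggers exactly the one-pod-at-a-time churn logged above. The repeated "can't tolerate node latest-control-plane" lines are expected: the DaemonSet's pods carry no toleration for the master NoSchedule taint, so that node is skipped throughout.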
• [SLOW TEST:36.335 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":244,"skipped":4112,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:03:02.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6441 STEP: creating a selector STEP: Creating the service pods in kubernetes May 27 01:03:02.124: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 27 01:03:02.197: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 01:03:04.336: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 01:03:06.202: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:03:08.202: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:03:10.202: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:03:12.202: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:03:14.407: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:03:16.202: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:03:18.202: INFO: The status of Pod netserver-0 is Running (Ready = true) May 27 01:03:18.207: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 27 01:03:22.231: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.226:8080/dial?request=hostname&protocol=http&host=10.244.1.216&port=8080&tries=1'] Namespace:pod-network-test-6441 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 01:03:22.231: INFO: >>> kubeConfig: /root/.kube/config I0527 01:03:22.260541 8 log.go:172] (0xc002b240b0) (0xc001f4b0e0) Create stream I0527 01:03:22.260566 8 log.go:172] (0xc002b240b0) (0xc001f4b0e0) Stream added, broadcasting: 1 I0527 01:03:22.262618 8 log.go:172] (0xc002b240b0) Reply frame received for 1 I0527 01:03:22.262647 8 log.go:172] (0xc002b240b0) (0xc002a760a0) Create stream I0527 01:03:22.262658 8 log.go:172] (0xc002b240b0) (0xc002a760a0) Stream added, broadcasting: 3 I0527 01:03:22.263945 8 log.go:172] (0xc002b240b0) Reply frame received for 3 I0527 01:03:22.263965 8 log.go:172] (0xc002b240b0) (0xc001f4b400) Create stream I0527 
01:03:22.263972 8 log.go:172] (0xc002b240b0) (0xc001f4b400) Stream added, broadcasting: 5 I0527 01:03:22.264894 8 log.go:172] (0xc002b240b0) Reply frame received for 5 I0527 01:03:22.374104 8 log.go:172] (0xc002b240b0) Data frame received for 3 I0527 01:03:22.374133 8 log.go:172] (0xc002a760a0) (3) Data frame handling I0527 01:03:22.374153 8 log.go:172] (0xc002a760a0) (3) Data frame sent I0527 01:03:22.375171 8 log.go:172] (0xc002b240b0) Data frame received for 3 I0527 01:03:22.375202 8 log.go:172] (0xc002a760a0) (3) Data frame handling I0527 01:03:22.375224 8 log.go:172] (0xc002b240b0) Data frame received for 5 I0527 01:03:22.375235 8 log.go:172] (0xc001f4b400) (5) Data frame handling I0527 01:03:22.376951 8 log.go:172] (0xc002b240b0) Data frame received for 1 I0527 01:03:22.376986 8 log.go:172] (0xc001f4b0e0) (1) Data frame handling I0527 01:03:22.377007 8 log.go:172] (0xc001f4b0e0) (1) Data frame sent I0527 01:03:22.377039 8 log.go:172] (0xc002b240b0) (0xc001f4b0e0) Stream removed, broadcasting: 1 I0527 01:03:22.377058 8 log.go:172] (0xc002b240b0) Go away received I0527 01:03:22.377294 8 log.go:172] (0xc002b240b0) (0xc001f4b0e0) Stream removed, broadcasting: 1 I0527 01:03:22.377309 8 log.go:172] (0xc002b240b0) (0xc002a760a0) Stream removed, broadcasting: 3 I0527 01:03:22.377315 8 log.go:172] (0xc002b240b0) (0xc001f4b400) Stream removed, broadcasting: 5 May 27 01:03:22.377: INFO: Waiting for responses: map[] May 27 01:03:22.380: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.226:8080/dial?request=hostname&protocol=http&host=10.244.2.225&port=8080&tries=1'] Namespace:pod-network-test-6441 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 01:03:22.381: INFO: >>> kubeConfig: /root/.kube/config I0527 01:03:22.419574 8 log.go:172] (0xc0025cc2c0) (0xc002b1a780) Create stream I0527 01:03:22.419614 8 log.go:172] (0xc0025cc2c0) (0xc002b1a780) Stream added, broadcasting: 1 I0527 01:03:22.422020 8 log.go:172] (0xc0025cc2c0) Reply frame received for 1 I0527 01:03:22.422072 8 log.go:172] (0xc0025cc2c0) (0xc002a76140) Create stream I0527 01:03:22.422088 8 log.go:172] (0xc0025cc2c0) (0xc002a76140) Stream added, broadcasting: 3 I0527 01:03:22.423126 8 log.go:172] (0xc0025cc2c0) Reply frame received for 3 I0527 01:03:22.423177 8 log.go:172] (0xc0025cc2c0) (0xc002a76280) Create stream I0527 01:03:22.423194 8 log.go:172] (0xc0025cc2c0) (0xc002a76280) Stream added, broadcasting: 5 I0527 01:03:22.424256 8 log.go:172] (0xc0025cc2c0) Reply frame received for 5 I0527 01:03:22.510053 8 log.go:172] (0xc0025cc2c0) Data frame received for 3 I0527 01:03:22.510123 8 log.go:172] (0xc002a76140) (3) Data frame handling I0527 01:03:22.510153 8 log.go:172] (0xc002a76140) (3) Data frame sent I0527 01:03:22.510188 8 log.go:172] (0xc0025cc2c0) Data frame received for 3 I0527 01:03:22.510216 8 log.go:172] (0xc002a76140) (3) Data frame handling I0527 01:03:22.510238 8 log.go:172] (0xc0025cc2c0) Data frame received for 5 I0527 01:03:22.510255 8 log.go:172] (0xc002a76280) (5) Data frame handling I0527 01:03:22.512235 8 log.go:172] (0xc0025cc2c0) Data frame received for 1 I0527 01:03:22.512269 8 log.go:172] (0xc002b1a780) (1) Data frame handling I0527 01:03:22.512303 8 log.go:172] (0xc002b1a780) (1) Data frame sent I0527 01:03:22.512330 8 log.go:172] (0xc0025cc2c0) (0xc002b1a780) Stream removed, broadcasting: 1 I0527 01:03:22.512355 8 log.go:172] (0xc0025cc2c0) Go away received I0527 01:03:22.512497 8 log.go:172] 
(0xc0025cc2c0) (0xc002b1a780) Stream removed, broadcasting: 1 I0527 01:03:22.512519 8 log.go:172] (0xc0025cc2c0) (0xc002a76140) Stream removed, broadcasting: 3 I0527 01:03:22.512529 8 log.go:172] (0xc0025cc2c0) (0xc002a76280) Stream removed, broadcasting: 5 May 27 01:03:22.512: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:03:22.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6441" for this suite. • [SLOW TEST:20.509 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":245,"skipped":4125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:03:22.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-612f7219-5e49-4c61-b426-292f64aa7ce6 STEP: Creating a pod to test consume secrets May 27 01:03:22.693: INFO: Waiting up to 5m0s for pod "pod-secrets-dba50ac3-f8e8-4070-b8e0-a1e0254cad49" in namespace "secrets-9311" to be "Succeeded or Failed" May 27 01:03:22.767: INFO: Pod "pod-secrets-dba50ac3-f8e8-4070-b8e0-a1e0254cad49": Phase="Pending", Reason="", readiness=false. Elapsed: 73.883517ms May 27 01:03:24.772: INFO: Pod "pod-secrets-dba50ac3-f8e8-4070-b8e0-a1e0254cad49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078453277s May 27 01:03:26.776: INFO: Pod "pod-secrets-dba50ac3-f8e8-4070-b8e0-a1e0254cad49": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.083240618s STEP: Saw pod success May 27 01:03:26.776: INFO: Pod "pod-secrets-dba50ac3-f8e8-4070-b8e0-a1e0254cad49" satisfied condition "Succeeded or Failed" May 27 01:03:26.780: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-dba50ac3-f8e8-4070-b8e0-a1e0254cad49 container secret-volume-test: STEP: delete the pod May 27 01:03:26.797: INFO: Waiting for pod pod-secrets-dba50ac3-f8e8-4070-b8e0-a1e0254cad49 to disappear May 27 01:03:26.801: INFO: Pod pod-secrets-dba50ac3-f8e8-4070-b8e0-a1e0254cad49 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:03:26.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9311" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":246,"skipped":4148,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:03:26.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-d94fe103-c677-40be-b312-84aef910cd02 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:03:32.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4437" for this suite. 
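The ConfigMap test that just finished checks that both data (UTF-8 strings) and binaryData (arbitrary bytes) keys materialize as files inside a mounted volume, matching the "Waiting for pod with text data / binary data" steps above. A minimal sketch, with assumed names, image, and payloads:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data:       map[string]string{"data-1": "value-1"},                  // text key
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}}, // binary key
	}
	// Pod that mounts the ConfigMap: /etc/cfg/data-1 and /etc/cfg/dump.bin
	// both appear as files in the volume.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "docker.io/library/busybox:1.29", // assumed image
				Command:      []string{"sh", "-c", "cat /etc/cfg/data-1 && od -An -tx1 /etc/cfg/dump.bin"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
				}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}

Note that binaryData keys must not collide with data keys, and on the wire the bytes travel base64-encoded in JSON/YAML; the API server enforces both.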
• [SLOW TEST:6.152 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":4164,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:03:32.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 27 01:03:33.020: INFO: Waiting up to 5m0s for pod "client-containers-dee24139-d94a-4e78-ae9e-bbff3e465d8d" in namespace "containers-7396" to be "Succeeded or Failed" May 27 01:03:33.067: INFO: Pod "client-containers-dee24139-d94a-4e78-ae9e-bbff3e465d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.040645ms May 27 01:03:35.089: INFO: Pod "client-containers-dee24139-d94a-4e78-ae9e-bbff3e465d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069245543s May 27 01:03:37.094: INFO: Pod "client-containers-dee24139-d94a-4e78-ae9e-bbff3e465d8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073990839s STEP: Saw pod success May 27 01:03:37.094: INFO: Pod "client-containers-dee24139-d94a-4e78-ae9e-bbff3e465d8d" satisfied condition "Succeeded or Failed" May 27 01:03:37.097: INFO: Trying to get logs from node latest-worker2 pod client-containers-dee24139-d94a-4e78-ae9e-bbff3e465d8d container test-container: STEP: delete the pod May 27 01:03:37.130: INFO: Waiting for pod client-containers-dee24139-d94a-4e78-ae9e-bbff3e465d8d to disappear May 27 01:03:37.138: INFO: Pod client-containers-dee24139-d94a-4e78-ae9e-bbff3e465d8d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:03:37.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7396" for this suite. 
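"Override the image's default arguments (docker cmd)" maps onto two container fields: command replaces the image's ENTRYPOINT, while args replaces its CMD. A sketch of the args-only override this test relies on; the image and echoed strings are assumptions for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29", // assumed image; busybox has no ENTRYPOINT
				// Command is left unset, so the image ENTRYPOINT (none here)
				// is kept; Args replaces the image CMD, so the container runs
				// "echo override arguments" instead of the default "sh".
				Args: []string{"echo", "override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The suite then fetches the container log (the "Trying to get logs from node ..." step above) and asserts the overridden arguments were what actually ran.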
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4180,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:03:37.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 27 01:03:37.226: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 27 01:03:37.234: INFO: Waiting for terminating namespaces to be deleted... May 27 01:03:37.236: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 27 01:03:37.252: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 27 01:03:37.252: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 27 01:03:37.252: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 27 01:03:37.252: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 27 01:03:37.252: INFO: pod-configmaps-636f65c1-5902-490b-89ff-d84f83a0957d from configmap-4437 started at 2020-05-27 01:03:26 +0000 UTC (2 container statuses recorded) May 27 01:03:37.252: INFO: Container configmap-volume-binary-test ready: false, restart count 0 May 27 01:03:37.252: INFO: Container configmap-volume-data-test ready: true, restart count 0 May 27 01:03:37.252: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 27 01:03:37.252: INFO: Container kindnet-cni ready: true, restart count 2 May 27 01:03:37.252: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 27 01:03:37.252: INFO: Container kube-proxy ready: true, restart count 0 May 27 01:03:37.252: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 27 01:03:37.256: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 27 01:03:37.256: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 27 01:03:37.256: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 27 01:03:37.256: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 27 01:03:37.256: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 27 01:03:37.256: INFO: Container kindnet-cni ready: true, restart count 2 May 27 01:03:37.256: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 27 
01:03:37.256: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8a3fd993-0384-4e07-a21e-e5aa617ec4cb 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-8a3fd993-0384-4e07-a21e-e5aa617ec4cb off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-8a3fd993-0384-4e07-a21e-e5aa617ec4cb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:03:53.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2605" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.363 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":249,"skipped":4183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:03:53.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9753 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace 
statefulset-9753 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9753 May 27 01:03:53.719: INFO: Found 0 stateful pods, waiting for 1 May 27 01:04:03.724: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 27 01:04:03.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9753 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 01:04:04.053: INFO: stderr: "I0527 01:04:03.889939 2975 log.go:172] (0xc000a3b6b0) (0xc00060bb80) Create stream\nI0527 01:04:03.890014 2975 log.go:172] (0xc000a3b6b0) (0xc00060bb80) Stream added, broadcasting: 1\nI0527 01:04:03.895388 2975 log.go:172] (0xc000a3b6b0) Reply frame received for 1\nI0527 01:04:03.895434 2975 log.go:172] (0xc000a3b6b0) (0xc00050c1e0) Create stream\nI0527 01:04:03.895448 2975 log.go:172] (0xc000a3b6b0) (0xc00050c1e0) Stream added, broadcasting: 3\nI0527 01:04:03.896527 2975 log.go:172] (0xc000a3b6b0) Reply frame received for 3\nI0527 01:04:03.896569 2975 log.go:172] (0xc000a3b6b0) (0xc00050d180) Create stream\nI0527 01:04:03.896581 2975 log.go:172] (0xc000a3b6b0) (0xc00050d180) Stream added, broadcasting: 5\nI0527 01:04:03.897907 2975 log.go:172] (0xc000a3b6b0) Reply frame received for 5\nI0527 01:04:03.987815 2975 log.go:172] (0xc000a3b6b0) Data frame received for 5\nI0527 01:04:03.987851 2975 log.go:172] (0xc00050d180) (5) Data frame handling\nI0527 01:04:03.987874 2975 log.go:172] (0xc00050d180) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 01:04:04.043837 2975 log.go:172] (0xc000a3b6b0) Data frame received for 3\nI0527 01:04:04.043882 2975 log.go:172] (0xc00050c1e0) (3) Data frame handling\nI0527 01:04:04.043970 2975 log.go:172] (0xc000a3b6b0) Data frame received for 5\nI0527 01:04:04.044029 2975 log.go:172] (0xc00050d180) (5) Data frame handling\nI0527 01:04:04.044062 2975 log.go:172] (0xc00050c1e0) (3) Data frame sent\nI0527 01:04:04.044303 2975 log.go:172] (0xc000a3b6b0) Data frame received for 3\nI0527 01:04:04.044319 2975 log.go:172] (0xc00050c1e0) (3) Data frame handling\nI0527 01:04:04.046667 2975 log.go:172] (0xc000a3b6b0) Data frame received for 1\nI0527 01:04:04.046684 2975 log.go:172] (0xc00060bb80) (1) Data frame handling\nI0527 01:04:04.046698 2975 log.go:172] (0xc00060bb80) (1) Data frame sent\nI0527 01:04:04.046708 2975 log.go:172] (0xc000a3b6b0) (0xc00060bb80) Stream removed, broadcasting: 1\nI0527 01:04:04.046716 2975 log.go:172] (0xc000a3b6b0) Go away received\nI0527 01:04:04.047160 2975 log.go:172] (0xc000a3b6b0) (0xc00060bb80) Stream removed, broadcasting: 1\nI0527 01:04:04.047190 2975 log.go:172] (0xc000a3b6b0) (0xc00050c1e0) Stream removed, broadcasting: 3\nI0527 01:04:04.047203 2975 log.go:172] (0xc000a3b6b0) (0xc00050d180) Stream removed, broadcasting: 5\n" May 27 01:04:04.053: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 01:04:04.053: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 01:04:04.056: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 27 01:04:14.115: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 27 01:04:14.115: INFO: Waiting for statefulset 
status.replicas updated to 0 May 27 01:04:14.160: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:14.160: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:14.160: INFO: May 27 01:04:14.160: INFO: StatefulSet ss has not reached scale 3, at 1 May 27 01:04:15.165: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997297116s May 27 01:04:16.302: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99202767s May 27 01:04:17.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.855336534s May 27 01:04:18.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.819363423s May 27 01:04:19.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.778554367s May 27 01:04:20.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.773178427s May 27 01:04:21.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.768067395s May 27 01:04:22.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.763413053s May 27 01:04:23.421: INFO: Verifying statefulset ss doesn't scale past 3 for another 758.368402ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9753 May 27 01:04:24.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9753 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 01:04:24.689: INFO: stderr: "I0527 01:04:24.591090 2995 log.go:172] (0xc000ac11e0) (0xc000b2a500) Create stream\nI0527 01:04:24.591177 2995 log.go:172] (0xc000ac11e0) (0xc000b2a500) Stream added, broadcasting: 1\nI0527 01:04:24.594054 2995 log.go:172] (0xc000ac11e0) Reply frame received for 1\nI0527 01:04:24.594096 2995 log.go:172] (0xc000ac11e0) (0xc000525b80) Create stream\nI0527 01:04:24.594112 2995 log.go:172] (0xc000ac11e0) (0xc000525b80) Stream added, broadcasting: 3\nI0527 01:04:24.595178 2995 log.go:172] (0xc000ac11e0) Reply frame received for 3\nI0527 01:04:24.595216 2995 log.go:172] (0xc000ac11e0) (0xc00051d5e0) Create stream\nI0527 01:04:24.595239 2995 log.go:172] (0xc000ac11e0) (0xc00051d5e0) Stream added, broadcasting: 5\nI0527 01:04:24.596263 2995 log.go:172] (0xc000ac11e0) Reply frame received for 5\nI0527 01:04:24.681706 2995 log.go:172] (0xc000ac11e0) Data frame received for 3\nI0527 01:04:24.681738 2995 log.go:172] (0xc000525b80) (3) Data frame handling\nI0527 01:04:24.681758 2995 log.go:172] (0xc000525b80) (3) Data frame sent\nI0527 01:04:24.681768 2995 log.go:172] (0xc000ac11e0) Data frame received for 3\nI0527 01:04:24.681775 2995 log.go:172] (0xc000525b80) (3) Data frame handling\nI0527 01:04:24.681881 2995 log.go:172] (0xc000ac11e0) Data frame received for 5\nI0527 01:04:24.681899 2995 log.go:172] (0xc00051d5e0) (5) Data frame handling\nI0527 01:04:24.681914 2995 log.go:172] (0xc00051d5e0) (5) Data frame sent\nI0527 01:04:24.681924 2995 log.go:172] (0xc000ac11e0) Data frame received for 5\nI0527 01:04:24.681929 2995 log.go:172] (0xc00051d5e0) (5) Data frame 
handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0527 01:04:24.683206 2995 log.go:172] (0xc000ac11e0) Data frame received for 1\nI0527 01:04:24.683226 2995 log.go:172] (0xc000b2a500) (1) Data frame handling\nI0527 01:04:24.683242 2995 log.go:172] (0xc000b2a500) (1) Data frame sent\nI0527 01:04:24.683284 2995 log.go:172] (0xc000ac11e0) (0xc000b2a500) Stream removed, broadcasting: 1\nI0527 01:04:24.683301 2995 log.go:172] (0xc000ac11e0) Go away received\nI0527 01:04:24.684223 2995 log.go:172] (0xc000ac11e0) (0xc000b2a500) Stream removed, broadcasting: 1\nI0527 01:04:24.684257 2995 log.go:172] (0xc000ac11e0) (0xc000525b80) Stream removed, broadcasting: 3\nI0527 01:04:24.684268 2995 log.go:172] (0xc000ac11e0) (0xc00051d5e0) Stream removed, broadcasting: 5\n" May 27 01:04:24.689: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 01:04:24.689: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 27 01:04:24.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9753 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 01:04:24.885: INFO: stderr: "I0527 01:04:24.811153 3015 log.go:172] (0xc000b531e0) (0xc000b565a0) Create stream\nI0527 01:04:24.811199 3015 log.go:172] (0xc000b531e0) (0xc000b565a0) Stream added, broadcasting: 1\nI0527 01:04:24.815556 3015 log.go:172] (0xc000b531e0) Reply frame received for 1\nI0527 01:04:24.815597 3015 log.go:172] (0xc000b531e0) (0xc000510140) Create stream\nI0527 01:04:24.815609 3015 log.go:172] (0xc000b531e0) (0xc000510140) Stream added, broadcasting: 3\nI0527 01:04:24.816620 3015 log.go:172] (0xc000b531e0) Reply frame received for 3\nI0527 01:04:24.816652 3015 log.go:172] (0xc000b531e0) (0xc00043cc80) Create stream\nI0527 01:04:24.816663 3015 log.go:172] (0xc000b531e0) (0xc00043cc80) Stream added, broadcasting: 5\nI0527 01:04:24.817915 3015 log.go:172] (0xc000b531e0) Reply frame received for 5\nI0527 01:04:24.878864 3015 log.go:172] (0xc000b531e0) Data frame received for 5\nI0527 01:04:24.878911 3015 log.go:172] (0xc00043cc80) (5) Data frame handling\nI0527 01:04:24.878927 3015 log.go:172] (0xc00043cc80) (5) Data frame sent\nI0527 01:04:24.878938 3015 log.go:172] (0xc000b531e0) Data frame received for 5\nI0527 01:04:24.878949 3015 log.go:172] (0xc00043cc80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0527 01:04:24.878998 3015 log.go:172] (0xc000b531e0) Data frame received for 3\nI0527 01:04:24.879026 3015 log.go:172] (0xc000510140) (3) Data frame handling\nI0527 01:04:24.879050 3015 log.go:172] (0xc000510140) (3) Data frame sent\nI0527 01:04:24.879059 3015 log.go:172] (0xc000b531e0) Data frame received for 3\nI0527 01:04:24.879065 3015 log.go:172] (0xc000510140) (3) Data frame handling\nI0527 01:04:24.880769 3015 log.go:172] (0xc000b531e0) Data frame received for 1\nI0527 01:04:24.880795 3015 log.go:172] (0xc000b565a0) (1) Data frame handling\nI0527 01:04:24.880824 3015 log.go:172] (0xc000b565a0) (1) Data frame sent\nI0527 01:04:24.880840 3015 log.go:172] (0xc000b531e0) (0xc000b565a0) Stream removed, broadcasting: 1\nI0527 01:04:24.880866 3015 log.go:172] (0xc000b531e0) Go away received\nI0527 01:04:24.881534 3015 log.go:172] (0xc000b531e0) (0xc000b565a0) Stream removed, broadcasting: 1\nI0527 
01:04:24.881574 3015 log.go:172] (0xc000b531e0) (0xc000510140) Stream removed, broadcasting: 3\nI0527 01:04:24.881598 3015 log.go:172] (0xc000b531e0) (0xc00043cc80) Stream removed, broadcasting: 5\n" May 27 01:04:24.886: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 01:04:24.886: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 27 01:04:24.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9753 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 27 01:04:25.117: INFO: stderr: "I0527 01:04:25.027436 3035 log.go:172] (0xc000bb5290) (0xc000aa4320) Create stream\nI0527 01:04:25.027487 3035 log.go:172] (0xc000bb5290) (0xc000aa4320) Stream added, broadcasting: 1\nI0527 01:04:25.032742 3035 log.go:172] (0xc000bb5290) Reply frame received for 1\nI0527 01:04:25.032797 3035 log.go:172] (0xc000bb5290) (0xc00056c320) Create stream\nI0527 01:04:25.032818 3035 log.go:172] (0xc000bb5290) (0xc00056c320) Stream added, broadcasting: 3\nI0527 01:04:25.034336 3035 log.go:172] (0xc000bb5290) Reply frame received for 3\nI0527 01:04:25.034384 3035 log.go:172] (0xc000bb5290) (0xc00056d2c0) Create stream\nI0527 01:04:25.034398 3035 log.go:172] (0xc000bb5290) (0xc00056d2c0) Stream added, broadcasting: 5\nI0527 01:04:25.035347 3035 log.go:172] (0xc000bb5290) Reply frame received for 5\nI0527 01:04:25.111392 3035 log.go:172] (0xc000bb5290) Data frame received for 5\nI0527 01:04:25.111419 3035 log.go:172] (0xc00056d2c0) (5) Data frame handling\nI0527 01:04:25.111431 3035 log.go:172] (0xc00056d2c0) (5) Data frame sent\nI0527 01:04:25.111439 3035 log.go:172] (0xc000bb5290) Data frame received for 5\nI0527 01:04:25.111446 3035 log.go:172] (0xc00056d2c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0527 01:04:25.111492 3035 log.go:172] (0xc000bb5290) Data frame received for 3\nI0527 01:04:25.111512 3035 log.go:172] (0xc00056c320) (3) Data frame handling\nI0527 01:04:25.111521 3035 log.go:172] (0xc00056c320) (3) Data frame sent\nI0527 01:04:25.111528 3035 log.go:172] (0xc000bb5290) Data frame received for 3\nI0527 01:04:25.111535 3035 log.go:172] (0xc00056c320) (3) Data frame handling\nI0527 01:04:25.112887 3035 log.go:172] (0xc000bb5290) Data frame received for 1\nI0527 01:04:25.112900 3035 log.go:172] (0xc000aa4320) (1) Data frame handling\nI0527 01:04:25.112911 3035 log.go:172] (0xc000aa4320) (1) Data frame sent\nI0527 01:04:25.112919 3035 log.go:172] (0xc000bb5290) (0xc000aa4320) Stream removed, broadcasting: 1\nI0527 01:04:25.113295 3035 log.go:172] (0xc000bb5290) (0xc000aa4320) Stream removed, broadcasting: 1\nI0527 01:04:25.113311 3035 log.go:172] (0xc000bb5290) (0xc00056c320) Stream removed, broadcasting: 3\nI0527 01:04:25.113319 3035 log.go:172] (0xc000bb5290) (0xc00056d2c0) Stream removed, broadcasting: 5\n" May 27 01:04:25.117: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 27 01:04:25.117: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 27 01:04:25.147: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 27 01:04:25.147: INFO: Waiting for pod ss-1 to enter Running - Ready=true, 
currently Running - Ready=true May 27 01:04:25.147: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 27 01:04:25.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9753 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 01:04:25.371: INFO: stderr: "I0527 01:04:25.292201 3055 log.go:172] (0xc00003aa50) (0xc00083f4a0) Create stream\nI0527 01:04:25.292261 3055 log.go:172] (0xc00003aa50) (0xc00083f4a0) Stream added, broadcasting: 1\nI0527 01:04:25.304338 3055 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0527 01:04:25.304388 3055 log.go:172] (0xc00003aa50) (0xc000640d20) Create stream\nI0527 01:04:25.304400 3055 log.go:172] (0xc00003aa50) (0xc000640d20) Stream added, broadcasting: 3\nI0527 01:04:25.305780 3055 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0527 01:04:25.305815 3055 log.go:172] (0xc00003aa50) (0xc000387cc0) Create stream\nI0527 01:04:25.305830 3055 log.go:172] (0xc00003aa50) (0xc000387cc0) Stream added, broadcasting: 5\nI0527 01:04:25.309011 3055 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0527 01:04:25.364515 3055 log.go:172] (0xc00003aa50) Data frame received for 5\nI0527 01:04:25.364556 3055 log.go:172] (0xc000387cc0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 01:04:25.364580 3055 log.go:172] (0xc00003aa50) Data frame received for 3\nI0527 01:04:25.364608 3055 log.go:172] (0xc000640d20) (3) Data frame handling\nI0527 01:04:25.364617 3055 log.go:172] (0xc000640d20) (3) Data frame sent\nI0527 01:04:25.364627 3055 log.go:172] (0xc00003aa50) Data frame received for 3\nI0527 01:04:25.364640 3055 log.go:172] (0xc000640d20) (3) Data frame handling\nI0527 01:04:25.364666 3055 log.go:172] (0xc000387cc0) (5) Data frame sent\nI0527 01:04:25.364673 3055 log.go:172] (0xc00003aa50) Data frame received for 5\nI0527 01:04:25.364678 3055 log.go:172] (0xc000387cc0) (5) Data frame handling\nI0527 01:04:25.366191 3055 log.go:172] (0xc00003aa50) Data frame received for 1\nI0527 01:04:25.366223 3055 log.go:172] (0xc00083f4a0) (1) Data frame handling\nI0527 01:04:25.366242 3055 log.go:172] (0xc00083f4a0) (1) Data frame sent\nI0527 01:04:25.366265 3055 log.go:172] (0xc00003aa50) (0xc00083f4a0) Stream removed, broadcasting: 1\nI0527 01:04:25.366293 3055 log.go:172] (0xc00003aa50) Go away received\nI0527 01:04:25.366568 3055 log.go:172] (0xc00003aa50) (0xc00083f4a0) Stream removed, broadcasting: 1\nI0527 01:04:25.366582 3055 log.go:172] (0xc00003aa50) (0xc000640d20) Stream removed, broadcasting: 3\nI0527 01:04:25.366592 3055 log.go:172] (0xc00003aa50) (0xc000387cc0) Stream removed, broadcasting: 5\n" May 27 01:04:25.371: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 01:04:25.371: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 01:04:25.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9753 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 01:04:25.627: INFO: stderr: "I0527 01:04:25.509024 3077 log.go:172] (0xc0009860b0) (0xc0005001e0) Create stream\nI0527 01:04:25.509430 3077 log.go:172] (0xc0009860b0) (0xc0005001e0) Stream added, broadcasting: 
1\nI0527 01:04:25.514152 3077 log.go:172] (0xc0009860b0) Reply frame received for 1\nI0527 01:04:25.514313 3077 log.go:172] (0xc0009860b0) (0xc0003cad20) Create stream\nI0527 01:04:25.514368 3077 log.go:172] (0xc0009860b0) (0xc0003cad20) Stream added, broadcasting: 3\nI0527 01:04:25.516257 3077 log.go:172] (0xc0009860b0) Reply frame received for 3\nI0527 01:04:25.516318 3077 log.go:172] (0xc0009860b0) (0xc000238f00) Create stream\nI0527 01:04:25.516358 3077 log.go:172] (0xc0009860b0) (0xc000238f00) Stream added, broadcasting: 5\nI0527 01:04:25.519348 3077 log.go:172] (0xc0009860b0) Reply frame received for 5\nI0527 01:04:25.572694 3077 log.go:172] (0xc0009860b0) Data frame received for 5\nI0527 01:04:25.572721 3077 log.go:172] (0xc000238f00) (5) Data frame handling\nI0527 01:04:25.572736 3077 log.go:172] (0xc000238f00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 01:04:25.618270 3077 log.go:172] (0xc0009860b0) Data frame received for 3\nI0527 01:04:25.618291 3077 log.go:172] (0xc0003cad20) (3) Data frame handling\nI0527 01:04:25.618306 3077 log.go:172] (0xc0003cad20) (3) Data frame sent\nI0527 01:04:25.618314 3077 log.go:172] (0xc0009860b0) Data frame received for 3\nI0527 01:04:25.618321 3077 log.go:172] (0xc0003cad20) (3) Data frame handling\nI0527 01:04:25.618388 3077 log.go:172] (0xc0009860b0) Data frame received for 5\nI0527 01:04:25.618399 3077 log.go:172] (0xc000238f00) (5) Data frame handling\nI0527 01:04:25.620999 3077 log.go:172] (0xc0009860b0) Data frame received for 1\nI0527 01:04:25.621032 3077 log.go:172] (0xc0005001e0) (1) Data frame handling\nI0527 01:04:25.621053 3077 log.go:172] (0xc0005001e0) (1) Data frame sent\nI0527 01:04:25.621076 3077 log.go:172] (0xc0009860b0) (0xc0005001e0) Stream removed, broadcasting: 1\nI0527 01:04:25.621096 3077 log.go:172] (0xc0009860b0) Go away received\nI0527 01:04:25.621613 3077 log.go:172] (0xc0009860b0) (0xc0005001e0) Stream removed, broadcasting: 1\nI0527 01:04:25.621640 3077 log.go:172] (0xc0009860b0) (0xc0003cad20) Stream removed, broadcasting: 3\nI0527 01:04:25.621653 3077 log.go:172] (0xc0009860b0) (0xc000238f00) Stream removed, broadcasting: 5\n" May 27 01:04:25.628: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 01:04:25.628: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 01:04:25.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9753 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 27 01:04:25.853: INFO: stderr: "I0527 01:04:25.758889 3099 log.go:172] (0xc000ab0370) (0xc000631400) Create stream\nI0527 01:04:25.758938 3099 log.go:172] (0xc000ab0370) (0xc000631400) Stream added, broadcasting: 1\nI0527 01:04:25.761575 3099 log.go:172] (0xc000ab0370) Reply frame received for 1\nI0527 01:04:25.761644 3099 log.go:172] (0xc000ab0370) (0xc000302140) Create stream\nI0527 01:04:25.761665 3099 log.go:172] (0xc000ab0370) (0xc000302140) Stream added, broadcasting: 3\nI0527 01:04:25.762722 3099 log.go:172] (0xc000ab0370) Reply frame received for 3\nI0527 01:04:25.762764 3099 log.go:172] (0xc000ab0370) (0xc000139540) Create stream\nI0527 01:04:25.762779 3099 log.go:172] (0xc000ab0370) (0xc000139540) Stream added, broadcasting: 5\nI0527 01:04:25.763790 3099 log.go:172] (0xc000ab0370) Reply frame received for 5\nI0527 01:04:25.818910 3099 log.go:172] 
(0xc000ab0370) Data frame received for 5\nI0527 01:04:25.818938 3099 log.go:172] (0xc000139540) (5) Data frame handling\nI0527 01:04:25.818960 3099 log.go:172] (0xc000139540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0527 01:04:25.846164 3099 log.go:172] (0xc000ab0370) Data frame received for 5\nI0527 01:04:25.846182 3099 log.go:172] (0xc000139540) (5) Data frame handling\nI0527 01:04:25.846200 3099 log.go:172] (0xc000ab0370) Data frame received for 3\nI0527 01:04:25.846224 3099 log.go:172] (0xc000302140) (3) Data frame handling\nI0527 01:04:25.846248 3099 log.go:172] (0xc000302140) (3) Data frame sent\nI0527 01:04:25.846267 3099 log.go:172] (0xc000ab0370) Data frame received for 3\nI0527 01:04:25.846281 3099 log.go:172] (0xc000302140) (3) Data frame handling\nI0527 01:04:25.847308 3099 log.go:172] (0xc000ab0370) Data frame received for 1\nI0527 01:04:25.847326 3099 log.go:172] (0xc000631400) (1) Data frame handling\nI0527 01:04:25.847348 3099 log.go:172] (0xc000631400) (1) Data frame sent\nI0527 01:04:25.847367 3099 log.go:172] (0xc000ab0370) (0xc000631400) Stream removed, broadcasting: 1\nI0527 01:04:25.847387 3099 log.go:172] (0xc000ab0370) Go away received\nI0527 01:04:25.847681 3099 log.go:172] (0xc000ab0370) (0xc000631400) Stream removed, broadcasting: 1\nI0527 01:04:25.847699 3099 log.go:172] (0xc000ab0370) (0xc000302140) Stream removed, broadcasting: 3\nI0527 01:04:25.847708 3099 log.go:172] (0xc000ab0370) (0xc000139540) Stream removed, broadcasting: 5\n" May 27 01:04:25.853: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 27 01:04:25.853: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 27 01:04:25.853: INFO: Waiting for statefulset status.replicas updated to 0 May 27 01:04:25.856: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 27 01:04:35.865: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 27 01:04:35.865: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 27 01:04:35.865: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 27 01:04:35.883: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:35.883: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:35.883: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:35.883: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 
01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:35.883: INFO: May 27 01:04:35.883: INFO: StatefulSet ss has not reached scale 0, at 3 May 27 01:04:36.888: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:36.888: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:36.888: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:36.889: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:36.889: INFO: May 27 01:04:36.889: INFO: StatefulSet ss has not reached scale 0, at 3 May 27 01:04:39.142: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:39.142: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:39.142: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:39.142: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:39.142: INFO: May 27 01:04:39.142: INFO: StatefulSet ss has not reached scale 0, at 3 May 27 01:04:40.149: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:40.149: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:40.149: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:40.149: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:40.149: INFO: May 27 01:04:40.149: INFO: StatefulSet ss has not reached scale 0, at 3 May 27 01:04:41.155: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:41.155: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:41.155: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:41.155: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:41.155: INFO: May 27 01:04:41.155: INFO: StatefulSet ss has not reached scale 0, 
at 3 May 27 01:04:42.161: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:42.161: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:42.161: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:42.161: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:42.161: INFO: May 27 01:04:42.161: INFO: StatefulSet ss has not reached scale 0, at 3 May 27 01:04:43.167: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:43.167: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:43.167: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:43.167: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:43.167: INFO: May 27 01:04:43.167: INFO: StatefulSet ss has not reached scale 0, at 3 May 27 01:04:44.176: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:44.176: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:03:53 +0000 UTC }] May 27 01:04:44.176: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:44.176: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:44.176: INFO: May 27 01:04:44.176: INFO: StatefulSet ss has not reached scale 0, at 3 May 27 01:04:45.181: INFO: POD NODE PHASE GRACE CONDITIONS May 27 01:04:45.181: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-27 01:04:14 +0000 UTC }] May 27 01:04:45.181: INFO: May 27 01:04:45.181: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods remain running in namespace statefulset-9753 May 27 01:04:46.185: INFO: Scaling statefulset ss to 0 May 27 01:04:46.197: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 27 01:04:46.200: INFO: Deleting all statefulset in ns statefulset-9753 May 27 01:04:46.203: INFO: Scaling statefulset ss to 0 May 27 01:04:46.212: INFO: Waiting for statefulset status.replicas updated to 0 May 27 01:04:46.215: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:04:46.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9753" for this suite.
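The run above is the heart of the burst-scaling guarantee: with podManagementPolicy: Parallel, the scale-down from 3 replicas to 0 keeps deleting pods even though every replica reports Ready=false. A minimal sketch for reproducing it by hand, assuming the same StatefulSet ss and namespace statefulset-9753 as in this log (the mv of index.html is the test's own trick for failing the webserver readiness probe):

# Break readiness on every replica, as the test did above
for pod in ss-0 ss-1 ss-2; do
  kubectl exec -n statefulset-9753 "$pod" -- \
    /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
done

# With Parallel pod management the scale-down is not blocked by unhealthy pods
kubectl scale statefulset ss -n statefulset-9753 --replicas=0

# Watch status.replicas converge to 0, mirroring the polling loop in the log
kubectl get statefulset ss -n statefulset-9753 -w
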
• [SLOW TEST:52.730 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":250,"skipped":4212,"failed":0} [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:04:46.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-3111 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3111 to expose endpoints map[] May 27 01:04:46.362: INFO: Get endpoints failed (10.391345ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 27 01:04:47.369: INFO: successfully validated that service endpoint-test2 in namespace services-3111 exposes endpoints map[] (1.016656165s elapsed) STEP: Creating pod pod1 in namespace services-3111 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3111 to expose endpoints map[pod1:[80]] May 27 01:04:51.746: INFO: successfully validated that service endpoint-test2 in namespace services-3111 exposes endpoints map[pod1:[80]] (4.370590709s elapsed) STEP: Creating pod pod2 in namespace services-3111 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3111 to expose endpoints map[pod1:[80] pod2:[80]] May 27 01:04:55.915: INFO: successfully validated that service endpoint-test2 in namespace services-3111 exposes endpoints map[pod1:[80] pod2:[80]] (4.163137572s elapsed) STEP: Deleting pod pod1 in namespace services-3111 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3111 to expose endpoints map[pod2:[80]] May 27 01:04:56.951: INFO: successfully validated that service endpoint-test2 in namespace services-3111 exposes endpoints map[pod2:[80]] (1.031472724s elapsed) STEP: Deleting pod pod2 in namespace services-3111 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3111 to expose endpoints map[] May 27 01:04:57.965: INFO: successfully validated that service endpoint-test2 in namespace services-3111 exposes endpoints map[] (1.008315039s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:04:58.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "services-3111" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.140 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":251,"skipped":4212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:04:58.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-d5qx STEP: Creating a pod to test atomic-volume-subpath May 27 01:04:58.502: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d5qx" in namespace "subpath-3191" to be "Succeeded or Failed" May 27 01:04:58.543: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Pending", Reason="", readiness=false. Elapsed: 41.317593ms May 27 01:05:00.548: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045989681s May 27 01:05:02.552: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 4.050248026s May 27 01:05:04.556: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 6.054484254s May 27 01:05:06.561: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 8.059275422s May 27 01:05:08.565: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 10.063063672s May 27 01:05:10.569: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 12.067648294s May 27 01:05:12.574: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 14.071943058s May 27 01:05:14.578: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 16.075822935s May 27 01:05:16.583: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 18.080931986s May 27 01:05:18.588: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. Elapsed: 20.085788907s May 27 01:05:20.592: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.09059931s May 27 01:05:22.598: INFO: Pod "pod-subpath-test-projected-d5qx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.09618989s STEP: Saw pod success May 27 01:05:22.598: INFO: Pod "pod-subpath-test-projected-d5qx" satisfied condition "Succeeded or Failed" May 27 01:05:22.602: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-d5qx container test-container-subpath-projected-d5qx: STEP: delete the pod May 27 01:05:22.652: INFO: Waiting for pod pod-subpath-test-projected-d5qx to disappear May 27 01:05:22.690: INFO: Pod pod-subpath-test-projected-d5qx no longer exists STEP: Deleting pod pod-subpath-test-projected-d5qx May 27 01:05:22.690: INFO: Deleting pod "pod-subpath-test-projected-d5qx" in namespace "subpath-3191" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:05:22.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3191" for this suite. • [SLOW TEST:24.325 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":252,"skipped":4243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:05:22.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 27 01:05:22.856: INFO: Waiting up to 1m0s for all nodes to be ready May 27 01:06:22.881: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:06:22.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
May 27 01:06:27.083: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 01:06:43.505: INFO: pods created so far: [1 1 1] May 27 01:06:43.505: INFO: length of pods created so far: 3 May 27 01:06:59.513: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:07:06.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6014" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:07:06.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3539" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:103.938 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":253,"skipped":4315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:07:06.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-pc6j STEP: Creating a pod to test atomic-volume-subpath May 27 01:07:06.800: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pc6j" in namespace "subpath-8918" to be "Succeeded or Failed" May 27 01:07:06.860: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Pending", Reason="", readiness=false. Elapsed: 59.93049ms May 27 01:07:08.864: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.064385592s May 27 01:07:10.869: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 4.069562271s May 27 01:07:12.912: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 6.112146259s May 27 01:07:14.915: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 8.115680192s May 27 01:07:16.920: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 10.120378939s May 27 01:07:18.925: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 12.12504038s May 27 01:07:21.219: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 14.418927366s May 27 01:07:23.229: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 16.428782546s May 27 01:07:25.233: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 18.433642507s May 27 01:07:27.238: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 20.438064177s May 27 01:07:29.242: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Running", Reason="", readiness=true. Elapsed: 22.442337449s May 27 01:07:31.247: INFO: Pod "pod-subpath-test-secret-pc6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.446878589s STEP: Saw pod success May 27 01:07:31.247: INFO: Pod "pod-subpath-test-secret-pc6j" satisfied condition "Succeeded or Failed" May 27 01:07:31.250: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-pc6j container test-container-subpath-secret-pc6j: STEP: delete the pod May 27 01:07:31.316: INFO: Waiting for pod pod-subpath-test-secret-pc6j to disappear May 27 01:07:31.344: INFO: Pod pod-subpath-test-secret-pc6j no longer exists STEP: Deleting pod pod-subpath-test-secret-pc6j May 27 01:07:31.344: INFO: Deleting pod "pod-subpath-test-secret-pc6j" in namespace "subpath-8918" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:07:31.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8918" for this suite. 
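Both Atomic writer subpath cases in this run (projected above, secret here) exercise the same mount mechanics: a single key of an atomic-writer volume is mounted through volumeMounts[].subPath instead of the whole volume directory. A minimal sketch of the secret variant, with hypothetical names (demo-secret, subpath-demo) rather than the test's generated ones:

kubectl create secret generic demo-secret --from-literal=greeting=hello

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/demo/greeting"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/demo/greeting
      subPath: greeting          # mount one key of the volume, not the whole directory
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
EOF

kubectl logs subpath-demo        # prints "hello" once the pod has completed

One caveat worth knowing: unlike whole-volume mounts, a subPath mount does not pick up later updates to the secret, which is part of why the suite tests subpaths separately from plain volume mounts.
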
• [SLOW TEST:24.736 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":254,"skipped":4385,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:07:31.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6549 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6549 I0527 01:07:31.577252 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6549, replica count: 2 I0527 01:07:34.627693 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 01:07:37.627977 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 01:07:37.628: INFO: Creating new exec pod May 27 01:07:42.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6549 execpodt6knr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 27 01:07:45.673: INFO: stderr: "I0527 01:07:45.572959 3121 log.go:172] (0xc000d56370) (0xc00082e6e0) Create stream\nI0527 01:07:45.572996 3121 log.go:172] (0xc000d56370) (0xc00082e6e0) Stream added, broadcasting: 1\nI0527 01:07:45.576020 3121 log.go:172] (0xc000d56370) Reply frame received for 1\nI0527 01:07:45.576059 3121 log.go:172] (0xc000d56370) (0xc00082f040) Create stream\nI0527 01:07:45.576071 3121 log.go:172] (0xc000d56370) (0xc00082f040) Stream added, broadcasting: 3\nI0527 01:07:45.577061 3121 log.go:172] (0xc000d56370) Reply frame received for 3\nI0527 01:07:45.577106 3121 log.go:172] (0xc000d56370) (0xc000820b40) Create stream\nI0527 01:07:45.577356 3121 log.go:172] (0xc000d56370) (0xc000820b40) Stream added, broadcasting: 5\nI0527 01:07:45.578431 3121 log.go:172] (0xc000d56370) Reply frame received for 5\nI0527 01:07:45.645790 3121 log.go:172] (0xc000d56370) Data frame received for 5\nI0527 01:07:45.645809 3121 log.go:172] (0xc000820b40) 
(5) Data frame handling\nI0527 01:07:45.645820 3121 log.go:172] (0xc000820b40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0527 01:07:45.664868 3121 log.go:172] (0xc000d56370) Data frame received for 5\nI0527 01:07:45.664896 3121 log.go:172] (0xc000820b40) (5) Data frame handling\nI0527 01:07:45.664977 3121 log.go:172] (0xc000820b40) (5) Data frame sent\nI0527 01:07:45.664989 3121 log.go:172] (0xc000d56370) Data frame received for 5\nI0527 01:07:45.664993 3121 log.go:172] (0xc000820b40) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0527 01:07:45.665348 3121 log.go:172] (0xc000d56370) Data frame received for 3\nI0527 01:07:45.665384 3121 log.go:172] (0xc00082f040) (3) Data frame handling\nI0527 01:07:45.667458 3121 log.go:172] (0xc000d56370) Data frame received for 1\nI0527 01:07:45.667483 3121 log.go:172] (0xc00082e6e0) (1) Data frame handling\nI0527 01:07:45.667498 3121 log.go:172] (0xc00082e6e0) (1) Data frame sent\nI0527 01:07:45.667525 3121 log.go:172] (0xc000d56370) (0xc00082e6e0) Stream removed, broadcasting: 1\nI0527 01:07:45.667705 3121 log.go:172] (0xc000d56370) Go away received\nI0527 01:07:45.668085 3121 log.go:172] (0xc000d56370) (0xc00082e6e0) Stream removed, broadcasting: 1\nI0527 01:07:45.668113 3121 log.go:172] (0xc000d56370) (0xc00082f040) Stream removed, broadcasting: 3\nI0527 01:07:45.668125 3121 log.go:172] (0xc000d56370) (0xc000820b40) Stream removed, broadcasting: 5\n" May 27 01:07:45.673: INFO: stdout: "" May 27 01:07:45.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6549 execpodt6knr -- /bin/sh -x -c nc -zv -t -w 2 10.104.154.160 80' May 27 01:07:45.898: INFO: stderr: "I0527 01:07:45.831228 3155 log.go:172] (0xc0009d5290) (0xc0009aa5a0) Create stream\nI0527 01:07:45.831281 3155 log.go:172] (0xc0009d5290) (0xc0009aa5a0) Stream added, broadcasting: 1\nI0527 01:07:45.836497 3155 log.go:172] (0xc0009d5290) Reply frame received for 1\nI0527 01:07:45.836535 3155 log.go:172] (0xc0009d5290) (0xc000668280) Create stream\nI0527 01:07:45.836548 3155 log.go:172] (0xc0009d5290) (0xc000668280) Stream added, broadcasting: 3\nI0527 01:07:45.837716 3155 log.go:172] (0xc0009d5290) Reply frame received for 3\nI0527 01:07:45.837754 3155 log.go:172] (0xc0009d5290) (0xc000669220) Create stream\nI0527 01:07:45.837764 3155 log.go:172] (0xc0009d5290) (0xc000669220) Stream added, broadcasting: 5\nI0527 01:07:45.838754 3155 log.go:172] (0xc0009d5290) Reply frame received for 5\nI0527 01:07:45.890587 3155 log.go:172] (0xc0009d5290) Data frame received for 5\nI0527 01:07:45.890622 3155 log.go:172] (0xc000669220) (5) Data frame handling\nI0527 01:07:45.890643 3155 log.go:172] (0xc000669220) (5) Data frame sent\nI0527 01:07:45.890665 3155 log.go:172] (0xc0009d5290) Data frame received for 5\nI0527 01:07:45.890683 3155 log.go:172] (0xc000669220) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.154.160 80\nConnection to 10.104.154.160 80 port [tcp/http] succeeded!\nI0527 01:07:45.890741 3155 log.go:172] (0xc0009d5290) Data frame received for 3\nI0527 01:07:45.890791 3155 log.go:172] (0xc000668280) (3) Data frame handling\nI0527 01:07:45.892082 3155 log.go:172] (0xc0009d5290) Data frame received for 1\nI0527 01:07:45.892098 3155 log.go:172] (0xc0009aa5a0) (1) Data frame handling\nI0527 01:07:45.892106 3155 log.go:172] (0xc0009aa5a0) (1) Data frame sent\nI0527 01:07:45.892115 3155 log.go:172] (0xc0009d5290) (0xc0009aa5a0) Stream removed, 
broadcasting: 1\nI0527 01:07:45.892131 3155 log.go:172] (0xc0009d5290) Go away received\nI0527 01:07:45.892389 3155 log.go:172] (0xc0009d5290) (0xc0009aa5a0) Stream removed, broadcasting: 1\nI0527 01:07:45.892406 3155 log.go:172] (0xc0009d5290) (0xc000668280) Stream removed, broadcasting: 3\nI0527 01:07:45.892413 3155 log.go:172] (0xc0009d5290) (0xc000669220) Stream removed, broadcasting: 5\n" May 27 01:07:45.898: INFO: stdout: "" May 27 01:07:45.898: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:07:45.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6549" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:14.558 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":255,"skipped":4393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:07:45.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 27 01:07:46.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2349' May 27 01:07:46.413: INFO: stderr: "" May 27 01:07:46.413: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 27 01:07:46.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2349' May 27 01:07:46.644: INFO: stderr: "" May 27 01:07:46.644: INFO: stdout: "update-demo-nautilus-6mq5r update-demo-nautilus-jlfw6 " May 27 01:07:46.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6mq5r -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2349' May 27 01:07:46.734: INFO: stderr: "" May 27 01:07:46.734: INFO: stdout: "" May 27 01:07:46.734: INFO: update-demo-nautilus-6mq5r is created but not running May 27 01:07:51.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2349' May 27 01:07:51.834: INFO: stderr: "" May 27 01:07:51.834: INFO: stdout: "update-demo-nautilus-6mq5r update-demo-nautilus-jlfw6 " May 27 01:07:51.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6mq5r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2349' May 27 01:07:52.025: INFO: stderr: "" May 27 01:07:52.026: INFO: stdout: "true" May 27 01:07:52.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6mq5r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2349' May 27 01:07:52.166: INFO: stderr: "" May 27 01:07:52.166: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 27 01:07:52.166: INFO: validating pod update-demo-nautilus-6mq5r May 27 01:07:52.192: INFO: got data: { "image": "nautilus.jpg" } May 27 01:07:52.192: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 27 01:07:52.192: INFO: update-demo-nautilus-6mq5r is verified up and running May 27 01:07:52.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jlfw6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2349' May 27 01:07:52.291: INFO: stderr: "" May 27 01:07:52.291: INFO: stdout: "true" May 27 01:07:52.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jlfw6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2349' May 27 01:07:52.386: INFO: stderr: "" May 27 01:07:52.386: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 27 01:07:52.386: INFO: validating pod update-demo-nautilus-jlfw6 May 27 01:07:52.407: INFO: got data: { "image": "nautilus.jpg" } May 27 01:07:52.407: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 27 01:07:52.407: INFO: update-demo-nautilus-jlfw6 is verified up and running STEP: using delete to clean up resources May 27 01:07:52.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2349' May 27 01:07:52.525: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 27 01:07:52.525: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 27 01:07:52.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2349' May 27 01:07:52.640: INFO: stderr: "No resources found in kubectl-2349 namespace.\n" May 27 01:07:52.640: INFO: stdout: "" May 27 01:07:52.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2349 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 27 01:07:52.743: INFO: stderr: "" May 27 01:07:52.743: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:07:52.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2349" for this suite. • [SLOW TEST:6.811 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":256,"skipped":4416,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:07:52.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ea324ae8-ee2d-4f76-8c7f-7f482a976788 STEP: Creating a pod to test consume configMaps May 27 01:07:53.395: INFO: Waiting up to 5m0s for pod "pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8" in namespace "configmap-3960" to be "Succeeded or Failed" May 27 01:07:53.415: INFO: Pod "pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.972682ms May 27 01:07:55.419: INFO: Pod "pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024145761s May 27 01:07:57.423: INFO: Pod "pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8": Phase="Running", Reason="", readiness=true. Elapsed: 4.027727624s May 27 01:07:59.441: INFO: Pod "pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045702781s STEP: Saw pod success May 27 01:07:59.441: INFO: Pod "pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8" satisfied condition "Succeeded or Failed" May 27 01:07:59.444: INFO: Trying to get logs from node latest-worker pod pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8 container configmap-volume-test: STEP: delete the pod May 27 01:07:59.477: INFO: Waiting for pod pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8 to disappear May 27 01:07:59.498: INFO: Pod pod-configmaps-28325658-5389-49a2-ab0f-72edb1e5e8c8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:07:59.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3960" for this suite. • [SLOW TEST:6.756 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:07:59.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 27 01:07:59.585: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:08:16.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6980" for this suite. 
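The submitted-and-removed check above is driven by a watch: the test registers a watch on the pod list, then verifies that both the creation and the graceful deletion show up as events before the namespace is torn down. A rough command-line equivalent, assuming a throwaway pod name watch-demo in the current namespace:

# Stream pod changes in the background; creation and deletion both appear here
kubectl get pods -w &

# Submit a pod, then delete it gracefully, as the test does
kubectl run watch-demo --image=k8s.gcr.io/pause:3.2 --restart=Never
kubectl delete pod watch-demo --grace-period=30

# Stop the background watch
kill %1
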
• [SLOW TEST:16.707 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4455,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:08:16.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 27 01:08:16.297: INFO: namespace kubectl-1758 May 27 01:08:16.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1758' May 27 01:08:16.573: INFO: stderr: "" May 27 01:08:16.573: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 27 01:08:17.577: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:17.577: INFO: Found 0 / 1 May 27 01:08:18.587: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:18.587: INFO: Found 0 / 1 May 27 01:08:19.579: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:19.579: INFO: Found 0 / 1 May 27 01:08:20.578: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:20.578: INFO: Found 1 / 1 May 27 01:08:20.578: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 27 01:08:20.582: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:20.582: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 27 01:08:20.582: INFO: wait on agnhost-master startup in kubectl-1758 May 27 01:08:20.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-g5rhz agnhost-master --namespace=kubectl-1758' May 27 01:08:20.739: INFO: stderr: "" May 27 01:08:20.739: INFO: stdout: "Paused\n" STEP: exposing RC May 27 01:08:20.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1758' May 27 01:08:20.892: INFO: stderr: "" May 27 01:08:20.892: INFO: stdout: "service/rm2 exposed\n" May 27 01:08:20.896: INFO: Service rm2 in namespace kubectl-1758 found. 
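Before the test layers a second expose on top (rm3, next), note what kubectl expose just produced: a Service named rm2 whose port 1234 forwards to the controller's pods on target port 6379. A quick way to confirm that mapping, assuming the same names as the log:

kubectl get service rm2 -n kubectl-1758 \
  -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}{"\n"}'
# expected output: 1234 -> 6379
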
STEP: exposing service May 27 01:08:22.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1758' May 27 01:08:23.040: INFO: stderr: "" May 27 01:08:23.040: INFO: stdout: "service/rm3 exposed\n" May 27 01:08:23.052: INFO: Service rm3 in namespace kubectl-1758 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:08:25.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1758" for this suite. • [SLOW TEST:8.853 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":259,"skipped":4470,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:08:25.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 27 01:08:25.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8575' May 27 01:08:25.401: INFO: stderr: "" May 27 01:08:25.401: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 27 01:08:26.406: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:26.406: INFO: Found 0 / 1 May 27 01:08:27.406: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:27.406: INFO: Found 0 / 1 May 27 01:08:28.406: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:28.406: INFO: Found 0 / 1 May 27 01:08:29.406: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:29.406: INFO: Found 1 / 1 May 27 01:08:29.406: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 27 01:08:29.410: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:29.410: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 27 01:08:29.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-zj6x6 --namespace=kubectl-8575 -p {"metadata":{"annotations":{"x":"y"}}}' May 27 01:08:29.526: INFO: stderr: "" May 27 01:08:29.526: INFO: stdout: "pod/agnhost-master-zj6x6 patched\n" STEP: checking annotations May 27 01:08:29.543: INFO: Selector matched 1 pods for map[app:agnhost] May 27 01:08:29.543: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:08:29.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8575" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":260,"skipped":4477,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:08:29.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 27 01:08:29.608: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:08:43.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2497" for this suite.
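The "mark a version not served" step amounts to flipping served: false on one version of the CRD and re-reading the published OpenAPI document; done manually it would look roughly like this (the CRD name and version index are illustrative, not taken from this run):

  # Stop serving the second version of a multi-version CRD.
  kubectl patch crd e2e-test-crd-publish-openapi-crds.example.com --type=json \
    -p='[{"op": "replace", "path": "/spec/versions/1/served", "value": false}]'
  # The unserved version's definition should now be absent from the published spec.
  kubectl get --raw /openapi/v2 | grep -c e2e-test-crd-publish-openapi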
• [SLOW TEST:13.559 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":261,"skipped":4489,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:08:43.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 27 01:08:43.235: INFO: Waiting up to 5m0s for pod "var-expansion-464dbb12-076f-4af1-a5d3-dde13b6633b6" in namespace "var-expansion-6134" to be "Succeeded or Failed" May 27 01:08:43.269: INFO: Pod "var-expansion-464dbb12-076f-4af1-a5d3-dde13b6633b6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.172887ms May 27 01:08:45.273: INFO: Pod "var-expansion-464dbb12-076f-4af1-a5d3-dde13b6633b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038163221s May 27 01:08:47.277: INFO: Pod "var-expansion-464dbb12-076f-4af1-a5d3-dde13b6633b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042031179s STEP: Saw pod success May 27 01:08:47.277: INFO: Pod "var-expansion-464dbb12-076f-4af1-a5d3-dde13b6633b6" satisfied condition "Succeeded or Failed" May 27 01:08:47.279: INFO: Trying to get logs from node latest-worker pod var-expansion-464dbb12-076f-4af1-a5d3-dde13b6633b6 container dapi-container: STEP: delete the pod May 27 01:08:47.319: INFO: Waiting for pod var-expansion-464dbb12-076f-4af1-a5d3-dde13b6633b6 to disappear May 27 01:08:47.329: INFO: Pod var-expansion-464dbb12-076f-4af1-a5d3-dde13b6633b6 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:08:47.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6134" for this suite. 
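The env composition under test relies on the kubelet expanding $(VAR) references to variables defined earlier in the same container; a minimal pod exercising it looks roughly like this (names and values illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo $FOOBAR"]
      env:
      - name: FOO
        value: foo-value
      - name: BAR
        value: bar-value
      - name: FOOBAR
        value: "$(FOO);;$(BAR)"   # composed from the two variables defined above
  EOF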
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":262,"skipped":4510,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:08:47.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 27 01:08:47.486: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 27 01:08:47.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2698' May 27 01:08:47.847: INFO: stderr: "" May 27 01:08:47.847: INFO: stdout: "service/agnhost-slave created\n" May 27 01:08:47.848: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 27 01:08:47.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2698' May 27 01:08:48.145: INFO: stderr: "" May 27 01:08:48.145: INFO: stdout: "service/agnhost-master created\n" May 27 01:08:48.145: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 27 01:08:48.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2698' May 27 01:08:48.505: INFO: stderr: "" May 27 01:08:48.506: INFO: stdout: "service/frontend created\n" May 27 01:08:48.506: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 27 01:08:48.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2698' May 27 01:08:48.766: INFO: stderr: "" May 27 01:08:48.766: INFO: stdout: "deployment.apps/frontend created\n" May 27 01:08:48.766: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 27 01:08:48.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2698' May 27 01:08:49.098: INFO: stderr: "" May 27 01:08:49.098: INFO: stdout: "deployment.apps/agnhost-master created\n" May 27 01:08:49.098: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 27 01:08:49.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2698' May 27 01:08:49.482: INFO: stderr: "" May 27 01:08:49.482: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 27 01:08:49.482: INFO: Waiting for all frontend pods to be Running. May 27 01:08:59.533: INFO: Waiting for frontend to serve content. May 27 01:08:59.567: INFO: Trying to add a new entry to the guestbook. May 27 01:08:59.584: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 27 01:08:59.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2698' May 27 01:08:59.779: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 27 01:08:59.779: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 27 01:08:59.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2698' May 27 01:08:59.968: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 27 01:08:59.968: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 27 01:08:59.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2698' May 27 01:09:00.159: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 27 01:09:00.159: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 27 01:09:00.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2698' May 27 01:09:00.265: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 27 01:09:00.265: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 27 01:09:00.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2698' May 27 01:09:00.414: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 27 01:09:00.414: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 27 01:09:00.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2698' May 27 01:09:00.601: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 27 01:09:00.601: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:09:00.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2698" for this suite. 
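Each cleanup step above pipes one of the manifests back into kubectl delete; run against a directory holding the same manifests it collapses to a single command (the directory name is illustrative):

  # Force-delete all guestbook resources without waiting for termination.
  kubectl delete --grace-period=0 --force -f guestbook/ --namespace=kubectl-2698

As the repeated warning notes, --grace-period=0 --force only removes the API objects immediately; the containers may keep running on the nodes until the kubelet reaps them.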
• [SLOW TEST:13.638 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":263,"skipped":4524,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:09:00.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 27 01:09:03.274: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 27 01:09:05.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138543, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138543, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138543, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138543, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 27 01:09:08.318: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 01:09:08.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7943-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:09:09.412: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-5677" for this suite. STEP: Destroying namespace "webhook-5677-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.553 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":264,"skipped":4538,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:09:09.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:09:13.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-65" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":265,"skipped":4548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:09:13.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 27 01:09:13.737: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:13.757: INFO: Number of nodes with available pods: 0 May 27 01:09:13.757: INFO: Node latest-worker is running more than one daemon pod May 27 01:09:14.762: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:14.794: INFO: Number of nodes with available pods: 0 May 27 01:09:14.794: INFO: Node latest-worker is running more than one daemon pod May 27 01:09:15.783: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:16.000: INFO: Number of nodes with available pods: 0 May 27 01:09:16.000: INFO: Node latest-worker is running more than one daemon pod May 27 01:09:16.762: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:16.903: INFO: Number of nodes with available pods: 0 May 27 01:09:16.903: INFO: Node latest-worker is running more than one daemon pod May 27 01:09:17.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:17.776: INFO: Number of nodes with available pods: 0 May 27 01:09:17.776: INFO: Node latest-worker is running more than one daemon pod May 27 01:09:19.119: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:19.129: INFO: Number of nodes with available pods: 2 May 27 01:09:19.129: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 27 01:09:19.325: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:19.352: INFO: Number of nodes with available pods: 1 May 27 01:09:19.352: INFO: Node latest-worker2 is running more than one daemon pod May 27 01:09:20.357: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:20.360: INFO: Number of nodes with available pods: 1 May 27 01:09:20.360: INFO: Node latest-worker2 is running more than one daemon pod May 27 01:09:21.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:21.594: INFO: Number of nodes with available pods: 1 May 27 01:09:21.594: INFO: Node latest-worker2 is running more than one daemon pod May 27 01:09:22.358: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:22.363: INFO: Number of nodes with available pods: 1 May 27 01:09:22.363: INFO: Node latest-worker2 is running more than one daemon pod May 27 01:09:23.359: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:09:23.363: INFO: Number of nodes with available pods: 2 May 27 01:09:23.363: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1705, will wait for the garbage collector to delete the pods May 27 01:09:23.427: INFO: Deleting DaemonSet.extensions daemon-set took: 6.167559ms May 27 01:09:23.528: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.185387ms May 27 01:09:35.331: INFO: Number of nodes with available pods: 0 May 27 01:09:35.331: INFO: Number of running nodes: 0, number of available pods: 0 May 27 01:09:35.334: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1705/daemonsets","resourceVersion":"7963667"},"items":null} May 27 01:09:35.337: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1705/pods","resourceVersion":"7963667"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:09:35.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1705" for this suite. 
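The repeated "can't tolerate" lines explain why the DaemonSet only lands on the two workers: latest-control-plane carries a node-role.kubernetes.io/master:NoSchedule taint and the pod template has no matching toleration. A toleration that would let the pods schedule there could be patched in roughly like this (a sketch only; the names are from this run, whose namespace has since been deleted):

  kubectl patch daemonset daemon-set --namespace=daemonsets-1705 -p '
  spec:
    template:
      spec:
        tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule'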
• [SLOW TEST:21.700 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":266,"skipped":4603,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:09:35.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-362be63e-09c9-4594-a940-229ed53831c8 STEP: Creating a pod to test consume configMaps May 27 01:09:35.540: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f" in namespace "projected-8533" to be "Succeeded or Failed" May 27 01:09:35.563: INFO: Pod "pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.77078ms May 27 01:09:37.567: INFO: Pod "pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026815371s May 27 01:09:39.571: INFO: Pod "pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031283537s May 27 01:09:41.603: INFO: Pod "pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063256779s STEP: Saw pod success May 27 01:09:41.603: INFO: Pod "pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f" satisfied condition "Succeeded or Failed" May 27 01:09:41.606: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f container projected-configmap-volume-test: STEP: delete the pod May 27 01:09:41.663: INFO: Waiting for pod pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f to disappear May 27 01:09:41.690: INFO: Pod pod-projected-configmaps-cd73ec6e-664f-4ef1-a43e-ae30eba0e38f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:09:41.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8533" for this suite. 
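"Mappings and Item mode" means individual ConfigMap keys are projected to chosen paths with an explicit per-file mode; a minimal sketch of such a pod (names and values illustrative, not from this run):

  kubectl create configmap projected-demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "ls -lR /etc/projected && cat /etc/projected/path/to/data-2"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: projected-demo-config
            items:
            - key: data-1            # source key in the ConfigMap
              path: path/to/data-2   # mapped path inside the volume
              mode: 0400             # the explicit "Item mode" under test
  EOF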
• [SLOW TEST:6.345 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4603,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:09:41.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6923 STEP: creating service affinity-nodeport-transition in namespace services-6923 STEP: creating replication controller affinity-nodeport-transition in namespace services-6923 I0527 01:09:41.931104 8 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6923, replica count: 3 I0527 01:09:44.981540 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 01:09:47.981804 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 01:09:50.982060 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 01:09:50.994: INFO: Creating new exec pod May 27 01:09:56.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6923 execpod-affinity6bqlp -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 27 01:09:56.346: INFO: stderr: "I0527 01:09:56.236279 3769 log.go:172] (0xc00099dad0) (0xc000af4640) Create stream\nI0527 01:09:56.236332 3769 log.go:172] (0xc00099dad0) (0xc000af4640) Stream added, broadcasting: 1\nI0527 01:09:56.240800 3769 log.go:172] (0xc00099dad0) Reply frame received for 1\nI0527 01:09:56.240847 3769 log.go:172] (0xc00099dad0) (0xc00064c1e0) Create stream\nI0527 01:09:56.240862 3769 log.go:172] (0xc00099dad0) (0xc00064c1e0) Stream added, broadcasting: 3\nI0527 01:09:56.241943 3769 log.go:172] (0xc00099dad0) Reply frame received for 3\nI0527 01:09:56.241976 3769 log.go:172] (0xc00099dad0) (0xc00064d180) Create stream\nI0527 01:09:56.241988 3769 log.go:172] (0xc00099dad0) (0xc00064d180) Stream added, 
broadcasting: 5\nI0527 01:09:56.242829 3769 log.go:172] (0xc00099dad0) Reply frame received for 5\nI0527 01:09:56.339258 3769 log.go:172] (0xc00099dad0) Data frame received for 5\nI0527 01:09:56.339306 3769 log.go:172] (0xc00064d180) (5) Data frame handling\nI0527 01:09:56.339329 3769 log.go:172] (0xc00064d180) (5) Data frame sent\nI0527 01:09:56.339351 3769 log.go:172] (0xc00099dad0) Data frame received for 5\nI0527 01:09:56.339366 3769 log.go:172] (0xc00064d180) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0527 01:09:56.339388 3769 log.go:172] (0xc00064d180) (5) Data frame sent\nI0527 01:09:56.339492 3769 log.go:172] (0xc00099dad0) Data frame received for 3\nI0527 01:09:56.339512 3769 log.go:172] (0xc00064c1e0) (3) Data frame handling\nI0527 01:09:56.340473 3769 log.go:172] (0xc00099dad0) Data frame received for 5\nI0527 01:09:56.340496 3769 log.go:172] (0xc00064d180) (5) Data frame handling\nI0527 01:09:56.341638 3769 log.go:172] (0xc00099dad0) Data frame received for 1\nI0527 01:09:56.341666 3769 log.go:172] (0xc000af4640) (1) Data frame handling\nI0527 01:09:56.341686 3769 log.go:172] (0xc000af4640) (1) Data frame sent\nI0527 01:09:56.341750 3769 log.go:172] (0xc00099dad0) (0xc000af4640) Stream removed, broadcasting: 1\nI0527 01:09:56.341770 3769 log.go:172] (0xc00099dad0) Go away received\nI0527 01:09:56.342195 3769 log.go:172] (0xc00099dad0) (0xc000af4640) Stream removed, broadcasting: 1\nI0527 01:09:56.342214 3769 log.go:172] (0xc00099dad0) (0xc00064c1e0) Stream removed, broadcasting: 3\nI0527 01:09:56.342223 3769 log.go:172] (0xc00099dad0) (0xc00064d180) Stream removed, broadcasting: 5\n" May 27 01:09:56.346: INFO: stdout: "" May 27 01:09:56.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6923 execpod-affinity6bqlp -- /bin/sh -x -c nc -zv -t -w 2 10.96.252.83 80' May 27 01:09:56.555: INFO: stderr: "I0527 01:09:56.472684 3791 log.go:172] (0xc000a3d080) (0xc0006ebd60) Create stream\nI0527 01:09:56.472745 3791 log.go:172] (0xc000a3d080) (0xc0006ebd60) Stream added, broadcasting: 1\nI0527 01:09:56.477833 3791 log.go:172] (0xc000a3d080) Reply frame received for 1\nI0527 01:09:56.477886 3791 log.go:172] (0xc000a3d080) (0xc0006b28c0) Create stream\nI0527 01:09:56.477901 3791 log.go:172] (0xc000a3d080) (0xc0006b28c0) Stream added, broadcasting: 3\nI0527 01:09:56.478857 3791 log.go:172] (0xc000a3d080) Reply frame received for 3\nI0527 01:09:56.478921 3791 log.go:172] (0xc000a3d080) (0xc000693ae0) Create stream\nI0527 01:09:56.478935 3791 log.go:172] (0xc000a3d080) (0xc000693ae0) Stream added, broadcasting: 5\nI0527 01:09:56.479993 3791 log.go:172] (0xc000a3d080) Reply frame received for 5\nI0527 01:09:56.546427 3791 log.go:172] (0xc000a3d080) Data frame received for 3\nI0527 01:09:56.546470 3791 log.go:172] (0xc0006b28c0) (3) Data frame handling\nI0527 01:09:56.546500 3791 log.go:172] (0xc000a3d080) Data frame received for 5\nI0527 01:09:56.546515 3791 log.go:172] (0xc000693ae0) (5) Data frame handling\nI0527 01:09:56.546529 3791 log.go:172] (0xc000693ae0) (5) Data frame sent\nI0527 01:09:56.546541 3791 log.go:172] (0xc000a3d080) Data frame received for 5\nI0527 01:09:56.546551 3791 log.go:172] (0xc000693ae0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.252.83 80\nConnection to 10.96.252.83 80 port [tcp/http] succeeded!\nI0527 01:09:56.548352 3791 log.go:172] (0xc000a3d080) Data frame received 
for 1\nI0527 01:09:56.548370 3791 log.go:172] (0xc0006ebd60) (1) Data frame handling\nI0527 01:09:56.548398 3791 log.go:172] (0xc0006ebd60) (1) Data frame sent\nI0527 01:09:56.548476 3791 log.go:172] (0xc000a3d080) (0xc0006ebd60) Stream removed, broadcasting: 1\nI0527 01:09:56.548515 3791 log.go:172] (0xc000a3d080) Go away received\nI0527 01:09:56.548769 3791 log.go:172] (0xc000a3d080) (0xc0006ebd60) Stream removed, broadcasting: 1\nI0527 01:09:56.548784 3791 log.go:172] (0xc000a3d080) (0xc0006b28c0) Stream removed, broadcasting: 3\nI0527 01:09:56.548793 3791 log.go:172] (0xc000a3d080) (0xc000693ae0) Stream removed, broadcasting: 5\n" May 27 01:09:56.555: INFO: stdout: "" May 27 01:09:56.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6923 execpod-affinity6bqlp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30786' May 27 01:09:56.760: INFO: stderr: "I0527 01:09:56.688623 3811 log.go:172] (0xc0009a82c0) (0xc00024e8c0) Create stream\nI0527 01:09:56.688687 3811 log.go:172] (0xc0009a82c0) (0xc00024e8c0) Stream added, broadcasting: 1\nI0527 01:09:56.690827 3811 log.go:172] (0xc0009a82c0) Reply frame received for 1\nI0527 01:09:56.690906 3811 log.go:172] (0xc0009a82c0) (0xc00052e140) Create stream\nI0527 01:09:56.690939 3811 log.go:172] (0xc0009a82c0) (0xc00052e140) Stream added, broadcasting: 3\nI0527 01:09:56.692222 3811 log.go:172] (0xc0009a82c0) Reply frame received for 3\nI0527 01:09:56.692245 3811 log.go:172] (0xc0009a82c0) (0xc000494d20) Create stream\nI0527 01:09:56.692254 3811 log.go:172] (0xc0009a82c0) (0xc000494d20) Stream added, broadcasting: 5\nI0527 01:09:56.693378 3811 log.go:172] (0xc0009a82c0) Reply frame received for 5\nI0527 01:09:56.753331 3811 log.go:172] (0xc0009a82c0) Data frame received for 3\nI0527 01:09:56.753368 3811 log.go:172] (0xc00052e140) (3) Data frame handling\nI0527 01:09:56.753422 3811 log.go:172] (0xc0009a82c0) Data frame received for 5\nI0527 01:09:56.753435 3811 log.go:172] (0xc000494d20) (5) Data frame handling\nI0527 01:09:56.753468 3811 log.go:172] (0xc000494d20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30786\nConnection to 172.17.0.13 30786 port [tcp/30786] succeeded!\nI0527 01:09:56.753632 3811 log.go:172] (0xc0009a82c0) Data frame received for 5\nI0527 01:09:56.753659 3811 log.go:172] (0xc000494d20) (5) Data frame handling\nI0527 01:09:56.754499 3811 log.go:172] (0xc0009a82c0) Data frame received for 1\nI0527 01:09:56.754522 3811 log.go:172] (0xc00024e8c0) (1) Data frame handling\nI0527 01:09:56.754534 3811 log.go:172] (0xc00024e8c0) (1) Data frame sent\nI0527 01:09:56.754691 3811 log.go:172] (0xc0009a82c0) (0xc00024e8c0) Stream removed, broadcasting: 1\nI0527 01:09:56.754842 3811 log.go:172] (0xc0009a82c0) Go away received\nI0527 01:09:56.755162 3811 log.go:172] (0xc0009a82c0) (0xc00024e8c0) Stream removed, broadcasting: 1\nI0527 01:09:56.755187 3811 log.go:172] (0xc0009a82c0) (0xc00052e140) Stream removed, broadcasting: 3\nI0527 01:09:56.755202 3811 log.go:172] (0xc0009a82c0) (0xc000494d20) Stream removed, broadcasting: 5\n" May 27 01:09:56.760: INFO: stdout: "" May 27 01:09:56.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6923 execpod-affinity6bqlp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30786' May 27 01:09:56.961: INFO: stderr: "I0527 01:09:56.888129 3832 log.go:172] (0xc0009ab130) (0xc000ac6280) Create stream\nI0527 01:09:56.888193 3832 log.go:172] 
(0xc0009ab130) (0xc000ac6280) Stream added, broadcasting: 1\nI0527 01:09:56.892718 3832 log.go:172] (0xc0009ab130) Reply frame received for 1\nI0527 01:09:56.892763 3832 log.go:172] (0xc0009ab130) (0xc000456dc0) Create stream\nI0527 01:09:56.892776 3832 log.go:172] (0xc0009ab130) (0xc000456dc0) Stream added, broadcasting: 3\nI0527 01:09:56.893725 3832 log.go:172] (0xc0009ab130) Reply frame received for 3\nI0527 01:09:56.893792 3832 log.go:172] (0xc0009ab130) (0xc000440500) Create stream\nI0527 01:09:56.893820 3832 log.go:172] (0xc0009ab130) (0xc000440500) Stream added, broadcasting: 5\nI0527 01:09:56.894546 3832 log.go:172] (0xc0009ab130) Reply frame received for 5\nI0527 01:09:56.952996 3832 log.go:172] (0xc0009ab130) Data frame received for 5\nI0527 01:09:56.953028 3832 log.go:172] (0xc000440500) (5) Data frame handling\nI0527 01:09:56.953056 3832 log.go:172] (0xc000440500) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30786\nI0527 01:09:56.953493 3832 log.go:172] (0xc0009ab130) Data frame received for 5\nI0527 01:09:56.953507 3832 log.go:172] (0xc000440500) (5) Data frame handling\nI0527 01:09:56.953514 3832 log.go:172] (0xc000440500) (5) Data frame sent\nConnection to 172.17.0.12 30786 port [tcp/30786] succeeded!\nI0527 01:09:56.953734 3832 log.go:172] (0xc0009ab130) Data frame received for 5\nI0527 01:09:56.953746 3832 log.go:172] (0xc000440500) (5) Data frame handling\nI0527 01:09:56.953960 3832 log.go:172] (0xc0009ab130) Data frame received for 3\nI0527 01:09:56.953976 3832 log.go:172] (0xc000456dc0) (3) Data frame handling\nI0527 01:09:56.955524 3832 log.go:172] (0xc0009ab130) Data frame received for 1\nI0527 01:09:56.955541 3832 log.go:172] (0xc000ac6280) (1) Data frame handling\nI0527 01:09:56.955558 3832 log.go:172] (0xc000ac6280) (1) Data frame sent\nI0527 01:09:56.955574 3832 log.go:172] (0xc0009ab130) (0xc000ac6280) Stream removed, broadcasting: 1\nI0527 01:09:56.955587 3832 log.go:172] (0xc0009ab130) Go away received\nI0527 01:09:56.955982 3832 log.go:172] (0xc0009ab130) (0xc000ac6280) Stream removed, broadcasting: 1\nI0527 01:09:56.955996 3832 log.go:172] (0xc0009ab130) (0xc000456dc0) Stream removed, broadcasting: 3\nI0527 01:09:56.956001 3832 log.go:172] (0xc0009ab130) (0xc000440500) Stream removed, broadcasting: 5\n" May 27 01:09:56.961: INFO: stdout: "" May 27 01:09:56.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6923 execpod-affinity6bqlp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30786/ ; done' May 27 01:09:57.307: INFO: stderr: "I0527 01:09:57.114242 3856 log.go:172] (0xc0000ec370) (0xc0004f8460) Create stream\nI0527 01:09:57.114293 3856 log.go:172] (0xc0000ec370) (0xc0004f8460) Stream added, broadcasting: 1\nI0527 01:09:57.116863 3856 log.go:172] (0xc0000ec370) Reply frame received for 1\nI0527 01:09:57.116912 3856 log.go:172] (0xc0000ec370) (0xc0004d6140) Create stream\nI0527 01:09:57.116928 3856 log.go:172] (0xc0000ec370) (0xc0004d6140) Stream added, broadcasting: 3\nI0527 01:09:57.117865 3856 log.go:172] (0xc0000ec370) Reply frame received for 3\nI0527 01:09:57.117923 3856 log.go:172] (0xc0000ec370) (0xc0004f9860) Create stream\nI0527 01:09:57.117942 3856 log.go:172] (0xc0000ec370) (0xc0004f9860) Stream added, broadcasting: 5\nI0527 01:09:57.118718 3856 log.go:172] (0xc0000ec370) Reply frame received for 5\nI0527 01:09:57.194796 3856 log.go:172] (0xc0000ec370) Data frame received for 3\nI0527 01:09:57.194828 3856 
log.go:172] (0xc0004d6140) (3) Data frame handling\nI0527 01:09:57.194836 3856 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0527 01:09:57.194858 3856 log.go:172] (0xc0000ec370) Data frame received for 5\nI0527 01:09:57.194879 3856 log.go:172] (0xc0004f9860) (5) Data frame handling\nI0527 01:09:57.194898 3856 log.go:172] (0xc0004f9860) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30786/\nI0527 01:09:57.199686 3856 log.go:172] (0xc0000ec370) Data frame received for 3\nI0527 01:09:57.199714 3856 log.go:172] (0xc0004d6140) (3) Data frame handling\nI0527 01:09:57.199740 3856 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0527 01:09:57.199961 3856 log.go:172] (0xc0000ec370) Data frame received for 3\nI0527 01:09:57.199975 3856 log.go:172] (0xc0004d6140) (3) Data frame handling\nI0527 01:09:57.199993 3856 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0527 01:09:57.200008 3856 log.go:172] (0xc0000ec370) Data frame received for 5\nI0527 01:09:57.200018 3856 log.go:172] (0xc0004f9860) (5) Data frame handling\nI0527 01:09:57.200026 3856 log.go:172] (0xc0004f9860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30786/\nI0527 01:09:57.206883 3856 log.go:172] (0xc0000ec370) Data frame received for 3\nI0527 01:09:57.206911 3856 log.go:172] (0xc0004d6140) (3) Data frame handling\nI0527 01:09:57.206930 3856 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0527 01:09:57.208440 3856 log.go:172] (0xc0000ec370) Data frame received for 3\nI0527 01:09:57.208466 3856 log.go:172] (0xc0004d6140) (3) Data frame handling\nI0527 01:09:57.208485 3856 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0527 01:09:57.208512 3856 log.go:172] (0xc0000ec370) Data frame received for 5\nI0527 01:09:57.208547 3856 log.go:172] (0xc0004f9860) (5) Data frame handling\nI0527 01:09:57.208587 3856 log.go:172] (0xc0004f9860) (5) Data frame sent\nI0527 01:09:57.208605 3856 log.go:172] (0xc0000ec370) Data frame received for 5\nI0527 01:09:57.208619 3856 log.go:172] (0xc0004f9860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30786/\nI0527 01:09:57.208677 3856 log.go:172] (0xc0004f9860) (5) Data frame sent\nI0527 01:09:57.216959 3856 log.go:172] (0xc0000ec370) Data frame received for 3\nI0527 01:09:57.216984 3856 log.go:172] (0xc0004d6140) (3) Data frame handling\nI0527 01:09:57.216998 3856 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0527 01:09:57.217492 3856 log.go:172] (0xc0000ec370) Data frame received for 5\nI0527 01:09:57.217511 3856 log.go:172] (0xc0004f9860) (5) Data frame handling\nI0527 01:09:57.217524 3856 log.go:172] (0xc0004f9860) (5) Data frame sent\nI0527 01:09:57.217536 3856 log.go:172] (0xc0000ec370) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30786/\nI0527 01:09:57.217547 3856 log.go:172] (0xc0004d6140) (3) Data frame handling\nI0527 01:09:57.217573 3856 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0527 01:09:57.223436 3856 log.go:172] (0xc0000ec370) Data frame received for 3\nI0527 01:09:57.223451 3856 log.go:172] (0xc0004d6140) (3) Data frame handling\nI0527 01:09:57.223468 3856 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0527 01:09:57.223813 3856 log.go:172] (0xc0000ec370) Data frame received for 5\nI0527 01:09:57.223830 3856 log.go:172] (0xc0004f9860) (5) Data frame handling\nI0527 01:09:57.223842 3856 log.go:172] (0xc0004f9860) (5) Data frame sent\nI0527 01:09:57.223852 3856 log.go:172] (0xc0000ec370) Data frame received for 5\nI0527 
01:09:57.223861 3856 log.go:172] (0xc0004f9860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30786/\n[... repetitive SPDY stream bookkeeping elided: for each of the 16 loop iterations the exec logs "Data frame received/handling/sent" for streams 3 and 5, interleaved with the shell trace "+ echo" / "+ curl -q -s --connect-timeout 2 http://172.17.0.13:30786/" ...]\nI0527 01:09:57.301785 3856 log.go:172] (0xc0000ec370) (0xc0004f8460) Stream removed, broadcasting: 1\nI0527 01:09:57.301804 3856 log.go:172] (0xc0000ec370) Go away received\nI0527 01:09:57.302224 3856 log.go:172] (0xc0000ec370) (0xc0004d6140) Stream removed, broadcasting: 3\nI0527 01:09:57.302234 3856 log.go:172] (0xc0000ec370) (0xc0004f9860) Stream removed, broadcasting: 5\n" May 27 01:09:57.308: INFO: stdout: "\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-65rrf\naffinity-nodeport-transition-65rrf\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-9mjq5\naffinity-nodeport-transition-9mjq5\naffinity-nodeport-transition-9mjq5\naffinity-nodeport-transition-65rrf\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-65rrf\naffinity-nodeport-transition-9mjq5\naffinity-nodeport-transition-9mjq5\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-65rrf" May 27 01:09:57.308: INFO: Received response from host: May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-65rrf May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-65rrf May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-9mjq5 May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-9mjq5 May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-9mjq5 May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-65rrf May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-65rrf May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-9mjq5 May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-9mjq5 May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.308: INFO: Received response from host: affinity-nodeport-transition-65rrf May 27 01:09:57.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6923 execpod-affinity6bqlp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30786/ ; done' May 27 01:09:57.643: INFO: stderr: "I0527 01:09:57.493399 3877 log.go:172] (0xc00003b6b0) (0xc000633cc0) Create stream\n[... stream setup for broadcasts 1, 3 and 5, then the same repetitive SPDY data-frame bookkeeping elided; the shell trace shows "+ seq 0 15" followed by sixteen "+ echo" / "+ curl -q -s --connect-timeout 2 http://172.17.0.13:30786/" iterations ...]\nI0527 01:09:57.635917 3877 log.go:172] (0xc00003b6b0) (0xc000633cc0) Stream removed, broadcasting: 1\nI0527 01:09:57.635935 3877 log.go:172] (0xc00003b6b0) (0xc0006406e0) Stream removed, broadcasting: 3\nI0527 01:09:57.635946 3877 log.go:172] (0xc00003b6b0) (0xc000641040) Stream removed, broadcasting: 5\n" May 27 01:09:57.643: INFO: stdout:
"\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff\naffinity-nodeport-transition-xt9ff" May 27 01:09:57.644: INFO: Received response from host: May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Received response from host: affinity-nodeport-transition-xt9ff May 27 01:09:57.644: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6923, will wait for the garbage collector to delete the pods May 27 01:09:57.828: INFO: Deleting ReplicationController affinity-nodeport-transition took: 87.174122ms May 27 01:09:58.328: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.249516ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:10:14.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6923" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:33.263 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":268,"skipped":4617,"failed":0} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:10:14.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6485, will wait for the garbage collector to delete the pods May 27 01:10:21.125: INFO: Deleting Job.batch foo took: 6.510039ms May 27 01:10:21.225: INFO: Terminating Job.batch foo pods took: 100.624644ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:11:05.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6485" for this suite. 
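The "will wait for the garbage collector to delete the pods" step above corresponds to deleting the Job with a deletion propagation policy and letting ownerReference-based garbage collection reap its pods. A hedged client-go sketch of the same deletion; the job name foo matches the log, while the namespace and kubeconfig path are placeholders.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the Job object lingers (with a deletion
	// timestamp) until the garbage collector has deleted all dependent
	// pods, mirroring the "Ensuring job was deleted" wait in the log.
	policy := metav1.DeletePropagationForeground
	if err := cs.BatchV1().Jobs("default").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}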
• [SLOW TEST:50.376 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":269,"skipped":4618,"failed":0} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:11:05.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-3974 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3974 to expose endpoints map[] May 27 01:11:05.476: INFO: Get endpoints failed (15.729704ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 27 01:11:06.481: INFO: successfully validated that service multi-endpoint-test in namespace services-3974 exposes endpoints map[] (1.020668927s elapsed) STEP: Creating pod pod1 in namespace services-3974 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3974 to expose endpoints map[pod1:[100]] May 27 01:11:12.116: INFO: successfully validated that service multi-endpoint-test in namespace services-3974 exposes endpoints map[pod1:[100]] (5.626941012s elapsed) STEP: Creating pod pod2 in namespace services-3974 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3974 to expose endpoints map[pod1:[100] pod2:[101]] May 27 01:11:15.298: INFO: successfully validated that service multi-endpoint-test in namespace services-3974 exposes endpoints map[pod1:[100] pod2:[101]] (3.176678204s elapsed) STEP: Deleting pod pod1 in namespace services-3974 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3974 to expose endpoints map[pod2:[101]] May 27 01:11:16.359: INFO: successfully validated that service multi-endpoint-test in namespace services-3974 exposes endpoints map[pod2:[101]] (1.057383448s elapsed) STEP: Deleting pod pod2 in namespace services-3974 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3974 to expose endpoints map[] May 27 01:11:17.484: INFO: successfully validated that service multi-endpoint-test in namespace services-3974 exposes endpoints map[] (1.118881512s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:11:17.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3974" for this suite. 
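What the multiport test validates is that a single Service with two named ports yields an Endpoints object pairing each ready pod with the matching targetPort (100 for pod1, 101 for pod2 in the log). A sketch under assumed namespace and selector labels; the port numbers mirror the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// One service, two named ports, mirroring multi-endpoint-test.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multiport"}, // placeholder labels
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The endpoints controller populates subsets as matching pods become
	// ready; right after Create the subsets may still be empty, which is
	// the "exposes endpoints map[]" state seen in the log.
	eps, err := cs.CoreV1().Endpoints("default").Get(ctx, "multi-endpoint-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ss := range eps.Subsets {
		for _, p := range ss.Ports {
			fmt.Printf("port %s/%d backed by %d address(es)\n", p.Name, p.Port, len(ss.Addresses))
		}
	}
}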
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.393 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":270,"skipped":4620,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:11:17.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 01:11:18.195: INFO: Create a RollingUpdate DaemonSet May 27 01:11:18.198: INFO: Check that daemon pods launch on every node of the cluster May 27 01:11:18.228: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:11:18.244: INFO: Number of nodes with available pods: 0 May 27 01:11:18.244: INFO: Node latest-worker is running more than one daemon pod May 27 01:11:19.250: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:11:19.255: INFO: Number of nodes with available pods: 0 May 27 01:11:19.255: INFO: Node latest-worker is running more than one daemon pod May 27 01:11:20.259: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:11:20.264: INFO: Number of nodes with available pods: 0 May 27 01:11:20.264: INFO: Node latest-worker is running more than one daemon pod May 27 01:11:21.267: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:11:21.335: INFO: Number of nodes with available pods: 0 May 27 01:11:21.335: INFO: Node latest-worker is running more than one daemon pod May 27 01:11:22.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:11:22.275: INFO: Number of nodes with available pods: 1 May 27 01:11:22.275: INFO: Node latest-worker2 is running more than one daemon pod May 27 01:11:23.260: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 27 01:11:23.264: INFO: Number of nodes with available pods: 2 May 27 01:11:23.264: INFO: Number of running nodes: 2, number of available pods: 2 May 27 01:11:23.264: INFO: Update the DaemonSet to trigger a rollout May 27 01:11:23.271: INFO: Updating DaemonSet daemon-set May 27 01:11:35.290: INFO: Roll back the DaemonSet before rollout is complete May 27 01:11:35.297: INFO: Updating DaemonSet daemon-set May 27 01:11:35.297: INFO: Make sure DaemonSet rollback is complete May 27 01:11:35.334: INFO: Wrong image for pod: daemon-set-4lvwc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 27 01:11:35.334: INFO: Pod daemon-set-4lvwc is not available May 27 01:11:35.353: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:11:36.357: INFO: Wrong image for pod: daemon-set-4lvwc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 27 01:11:36.357: INFO: Pod daemon-set-4lvwc is not available May 27 01:11:36.362: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 27 01:11:37.382: INFO: Pod daemon-set-k4vzb is not available May 27 01:11:37.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4883, will wait for the garbage collector to delete the pods May 27 01:11:37.475: INFO: Deleting DaemonSet.extensions daemon-set took: 4.904354ms May 27 01:11:37.876: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.238916ms May 27 01:11:45.292: INFO: Number of nodes with available pods: 0 May 27 01:11:45.292: INFO: Number of running nodes: 0, number of available pods: 0 May 27 01:11:45.294: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4883/daemonsets","resourceVersion":"7964396"},"items":null} May 27 01:11:45.296: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4883/pods","resourceVersion":"7964396"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:11:45.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4883" for this suite. 
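The rollback above is template-level: the test pushes an image that can never pull (foo:non-existent), then restores the previous template before the rollout completes, and asserts that pods still running the old image are not restarted. A sketch of the two updates using the images from the log; the namespace is a placeholder, and the object is re-read before each update to avoid resourceVersion conflicts.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	dsClient := cs.AppsV1().DaemonSets("default")

	// Trigger a RollingUpdate rollout with a broken image.
	ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if _, err := dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// "Roll back" mid-rollout by restoring the old template. Nodes whose
	// pods never left the old image keep them; only the broken pod
	// (daemon-set-4lvwc in the log) gets replaced.
	ds, err = dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}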
• [SLOW TEST:27.582 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":271,"skipped":4628,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:11:45.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 27 01:11:45.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5111' May 27 01:11:45.614: INFO: stderr: "" May 27 01:11:45.614: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 27 01:11:45.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5111' May 27 01:11:55.235: INFO: stderr: "" May 27 01:11:55.235: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:11:55.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5111" for this suite. 
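kubectl run with --restart=Never generates a bare Pod, so the invocation above is equivalent to a single Pod create. A client-go sketch of the same request, with a placeholder namespace; pod name and image match the log.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // --restart=Never
			Containers: []corev1.Container{{
				Name:  "e2e-test-httpd-pod",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}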
• [SLOW TEST:9.963 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":272,"skipped":4643,"failed":0} S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:11:55.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 01:11:55.368: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:12:01.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7677" for this suite. 
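The websocket test drives the pods/exec subresource that kubectl exec also uses. The sketch below reaches the same subresource through client-go's SPDY executor rather than a raw websocket client (the test itself exercises the websocket upgrade path directly); pod name, namespace, and command are placeholders.

package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build the pods/exec subresource request that the API server upgrades
	// to a multiplexed streaming connection.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").
		Name("pod-exec-demo"). // placeholder pod
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"/bin/sh", "-c", "echo remote command output"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Stream runs the command and copies the remote stdout/stderr streams
	// back over the upgraded connection.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}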
• [SLOW TEST:6.340 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4644,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:12:01.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:12:17.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8095" for this suite. • [SLOW TEST:16.094 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":274,"skipped":4651,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:12:17.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-948a2e2e-756a-48e2-beea-c129120a9371 STEP: Creating configMap with name cm-test-opt-upd-bfc9b9b7-9ef5-4413-a738-735d5efd8325 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-948a2e2e-756a-48e2-beea-c129120a9371 STEP: Updating configmap cm-test-opt-upd-bfc9b9b7-9ef5-4413-a738-735d5efd8325 STEP: Creating configMap with name cm-test-opt-create-3e0414d1-4c76-4332-a97d-6c042f52ceae STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected 
configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:13:30.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9027" for this suite. • [SLOW TEST:72.908 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4688,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:13:30.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 01:13:30.694: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 27 01:13:30.746: INFO: Pod name sample-pod: Found 0 pods out of 1 May 27 01:13:35.752: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 27 01:13:35.752: INFO: Creating deployment "test-rolling-update-deployment" May 27 01:13:35.758: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 27 01:13:35.794: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 27 01:13:37.803: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 27 01:13:37.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138815, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138815, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138815, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138815, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 01:13:39.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138815, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138815, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138815, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138815, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 01:13:41.809: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 27 01:13:41.820: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6122 /apis/apps/v1/namespaces/deployment-6122/deployments/test-rolling-update-deployment 2200d49f-52e8-46ab-9a57-59d71dd03c47 7964986 1 2020-05-27 01:13:35 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-27 01:13:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-27 01:13:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003ddfe58 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-27 01:13:35 +0000 UTC,LastTransitionTime:2020-05-27 01:13:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-27 01:13:40 +0000 UTC,LastTransitionTime:2020-05-27 01:13:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 27 01:13:41.822: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-6122 /apis/apps/v1/namespaces/deployment-6122/replicasets/test-rolling-update-deployment-df7bb669b a2a7909d-3475-4727-bc4c-5e7711293f34 7964975 1 2020-05-27 01:13:35 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2200d49f-52e8-46ab-9a57-59d71dd03c47 0xc003e21e70 0xc003e21e71}] [] [{kube-controller-manager Update apps/v1 2020-05-27 01:13:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2200d49f-52e8-46ab-9a57-59d71dd03c47\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e21ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 27 01:13:41.822: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 27 01:13:41.822: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6122 /apis/apps/v1/namespaces/deployment-6122/replicasets/test-rolling-update-controller 9d1fd038-13fa-431e-9ee5-91e6f7022fe9 7964985 2 2020-05-27 01:13:30 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2200d49f-52e8-46ab-9a57-59d71dd03c47 0xc003e21d67 0xc003e21d68}] [] [{e2e.test Update apps/v1 2020-05-27 01:13:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-27 01:13:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2200d49f-52e8-46ab-9a57-59d71dd03c47\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003e21e08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 27 01:13:41.825: INFO: Pod "test-rolling-update-deployment-df7bb669b-tbtlz" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-tbtlz 
test-rolling-update-deployment-df7bb669b- deployment-6122 /api/v1/namespaces/deployment-6122/pods/test-rolling-update-deployment-df7bb669b-tbtlz a1d2b003-615a-478f-b32f-a16127dd83ec 7964974 0 2020-05-27 01:13:35 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b a2a7909d-3475-4727-bc4c-5e7711293f34 0xc003cce020 0xc003cce021}] [] [{kube-controller-manager Update v1 2020-05-27 01:13:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2a7909d-3475-4727-bc4c-5e7711293f34\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-27 01:13:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.5\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-22khr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-22khr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-22khr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUs
er:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 01:13:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 01:13:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 01:13:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 01:13:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.5,StartTime:2020-05-27 01:13:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-27 01:13:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://4943da86959f808c7c13e6b397dd55947eca8172c00e4deec7362ad8831497f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:13:41.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6122" for this suite. 
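
Note: the Deployment, ReplicaSet, and Pod dumps above are what the framework prints while confirming a rolling update has completed; the fields that matter are ObservedGeneration, UpdatedReplicas, AvailableReplicas, and the Available/Progressing conditions. Below is a minimal client-go sketch of an equivalent completeness check. It is not the e2e framework's own helper; the kubeconfig path, namespace, and deployment name are copied from this run purely for illustration, and the poll interval and timeout are arbitrary.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as used throughout this run; adjust for your cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll until the controller has observed the latest generation and all
        // replicas are updated and available -- the same fields dumped above.
        // Spec.Replicas is defaulted by the API server, so the dereference is safe.
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            d, err := client.AppsV1().Deployments("deployment-6122").Get(
                context.TODO(), "test-rolling-update-deployment", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return d.Status.ObservedGeneration >= d.Generation &&
                d.Status.UpdatedReplicas == *d.Spec.Replicas &&
                d.Status.AvailableReplicas == *d.Spec.Replicas, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("rollout complete")
    }

From the command line, "kubectl rollout status deployment/test-rolling-update-deployment" performs essentially the same wait.
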
• [SLOW TEST:11.207 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":276,"skipped":4699,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:13:41.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-f9151141-4da4-499e-aad3-27248f8f7b26 STEP: Creating a pod to test consume configMaps May 27 01:13:41.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-dec4daec-c45e-43b0-a56e-a8d33f486843" in namespace "configmap-1539" to be "Succeeded or Failed" May 27 01:13:41.987: INFO: Pod "pod-configmaps-dec4daec-c45e-43b0-a56e-a8d33f486843": Phase="Pending", Reason="", readiness=false. Elapsed: 8.935721ms May 27 01:13:43.992: INFO: Pod "pod-configmaps-dec4daec-c45e-43b0-a56e-a8d33f486843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013641934s May 27 01:13:46.006: INFO: Pod "pod-configmaps-dec4daec-c45e-43b0-a56e-a8d33f486843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028516094s STEP: Saw pod success May 27 01:13:46.006: INFO: Pod "pod-configmaps-dec4daec-c45e-43b0-a56e-a8d33f486843" satisfied condition "Succeeded or Failed" May 27 01:13:46.010: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-dec4daec-c45e-43b0-a56e-a8d33f486843 container configmap-volume-test: STEP: delete the pod May 27 01:13:46.041: INFO: Waiting for pod pod-configmaps-dec4daec-c45e-43b0-a56e-a8d33f486843 to disappear May 27 01:13:46.045: INFO: Pod pod-configmaps-dec4daec-c45e-43b0-a56e-a8d33f486843 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:13:46.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1539" for this suite. 
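
Note: the ConfigMap test above creates a ConfigMap, mounts it into a pod with a key-to-path mapping and an explicit per-item file mode, then waits for the pod to reach "Succeeded or Failed". The sketch below builds objects of that shape; the names, image, command, mount path, and 0400 mode are illustrative assumptions (the actual test uses agnhost's mount-test logic to verify file content and permissions), not the test's real definitions.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        mode := int32(0400) // illustrative per-item mode
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
            Spec: corev1.PodSpec{
                // Never restart, so the pod can settle at "Succeeded or Failed".
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                            // One key remapped to a chosen path with an explicit
                            // mode -- the "mappings and Item mode set" part.
                            Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "docker.io/library/busybox:1.29", // stand-in; the test uses agnhost
                    Command: []string{"ls", "-l", "/etc/configmap-volume/path/to/data-2"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
        fmt.Println(cm.Name, pod.Name)
    }

If Mode were omitted, the item would fall back to the volume's defaultMode, which is 0644 unless set; that default is the DefaultMode:*420 visible in the pod dumps in this log.
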
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4700,"failed":0} SSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:13:46.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-974 STEP: creating service affinity-clusterip in namespace services-974 STEP: creating replication controller affinity-clusterip in namespace services-974 I0527 01:13:46.461793 8 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-974, replica count: 3 I0527 01:13:49.512194 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0527 01:13:52.512441 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 27 01:13:52.559: INFO: Creating new exec pod May 27 01:13:57.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-974 execpod-affinity8dr5n -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 27 01:13:57.826: INFO: stderr: "I0527 01:13:57.735140 3933 log.go:172] (0xc000a15550) (0xc000840f00) Create stream\nI0527 01:13:57.735205 3933 log.go:172] (0xc000a15550) (0xc000840f00) Stream added, broadcasting: 1\nI0527 01:13:57.737266 3933 log.go:172] (0xc000a15550) Reply frame received for 1\nI0527 01:13:57.737305 3933 log.go:172] (0xc000a15550) (0xc0005ccc80) Create stream\nI0527 01:13:57.737375 3933 log.go:172] (0xc000a15550) (0xc0005ccc80) Stream added, broadcasting: 3\nI0527 01:13:57.738197 3933 log.go:172] (0xc000a15550) Reply frame received for 3\nI0527 01:13:57.738229 3933 log.go:172] (0xc000a15550) (0xc000ab20a0) Create stream\nI0527 01:13:57.738242 3933 log.go:172] (0xc000a15550) (0xc000ab20a0) Stream added, broadcasting: 5\nI0527 01:13:57.738866 3933 log.go:172] (0xc000a15550) Reply frame received for 5\nI0527 01:13:57.806172 3933 log.go:172] (0xc000a15550) Data frame received for 5\nI0527 01:13:57.806209 3933 log.go:172] (0xc000ab20a0) (5) Data frame handling\nI0527 01:13:57.806234 3933 log.go:172] (0xc000ab20a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0527 01:13:57.817096 3933 log.go:172] (0xc000a15550) Data frame received for 5\nI0527 01:13:57.817279 3933 log.go:172] (0xc000ab20a0) (5) Data frame handling\nI0527 01:13:57.817321 3933 log.go:172] (0xc000ab20a0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0527 
01:13:57.817997 3933 log.go:172] (0xc000a15550) Data frame received for 3\nI0527 01:13:57.818039 3933 log.go:172] (0xc0005ccc80) (3) Data frame handling\nI0527 01:13:57.818090 3933 log.go:172] (0xc000a15550) Data frame received for 5\nI0527 01:13:57.818122 3933 log.go:172] (0xc000ab20a0) (5) Data frame handling\nI0527 01:13:57.820230 3933 log.go:172] (0xc000a15550) Data frame received for 1\nI0527 01:13:57.820264 3933 log.go:172] (0xc000840f00) (1) Data frame handling\nI0527 01:13:57.820282 3933 log.go:172] (0xc000840f00) (1) Data frame sent\nI0527 01:13:57.820296 3933 log.go:172] (0xc000a15550) (0xc000840f00) Stream removed, broadcasting: 1\nI0527 01:13:57.820321 3933 log.go:172] (0xc000a15550) Go away received\nI0527 01:13:57.820762 3933 log.go:172] (0xc000a15550) (0xc000840f00) Stream removed, broadcasting: 1\nI0527 01:13:57.820787 3933 log.go:172] (0xc000a15550) (0xc0005ccc80) Stream removed, broadcasting: 3\nI0527 01:13:57.820807 3933 log.go:172] (0xc000a15550) (0xc000ab20a0) Stream removed, broadcasting: 5\n" May 27 01:13:57.826: INFO: stdout: "" May 27 01:13:57.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-974 execpod-affinity8dr5n -- /bin/sh -x -c nc -zv -t -w 2 10.105.114.227 80' May 27 01:13:58.018: INFO: stderr: "I0527 01:13:57.954706 3952 log.go:172] (0xc0009aa000) (0xc0006dc500) Create stream\nI0527 01:13:57.954764 3952 log.go:172] (0xc0009aa000) (0xc0006dc500) Stream added, broadcasting: 1\nI0527 01:13:57.956498 3952 log.go:172] (0xc0009aa000) Reply frame received for 1\nI0527 01:13:57.956541 3952 log.go:172] (0xc0009aa000) (0xc0005321e0) Create stream\nI0527 01:13:57.956551 3952 log.go:172] (0xc0009aa000) (0xc0005321e0) Stream added, broadcasting: 3\nI0527 01:13:57.957674 3952 log.go:172] (0xc0009aa000) Reply frame received for 3\nI0527 01:13:57.957727 3952 log.go:172] (0xc0009aa000) (0xc000454d20) Create stream\nI0527 01:13:57.957758 3952 log.go:172] (0xc0009aa000) (0xc000454d20) Stream added, broadcasting: 5\nI0527 01:13:57.958582 3952 log.go:172] (0xc0009aa000) Reply frame received for 5\nI0527 01:13:58.011039 3952 log.go:172] (0xc0009aa000) Data frame received for 5\nI0527 01:13:58.011089 3952 log.go:172] (0xc000454d20) (5) Data frame handling\nI0527 01:13:58.011126 3952 log.go:172] (0xc000454d20) (5) Data frame sent\nI0527 01:13:58.011144 3952 log.go:172] (0xc0009aa000) Data frame received for 5\nI0527 01:13:58.011161 3952 log.go:172] (0xc000454d20) (5) Data frame handling\nI0527 01:13:58.011245 3952 log.go:172] (0xc0009aa000) Data frame received for 3\nI0527 01:13:58.011266 3952 log.go:172] (0xc0005321e0) (3) Data frame handling\n+ nc -zv -t -w 2 10.105.114.227 80\nConnection to 10.105.114.227 80 port [tcp/http] succeeded!\nI0527 01:13:58.012870 3952 log.go:172] (0xc0009aa000) Data frame received for 1\nI0527 01:13:58.012889 3952 log.go:172] (0xc0006dc500) (1) Data frame handling\nI0527 01:13:58.012911 3952 log.go:172] (0xc0006dc500) (1) Data frame sent\nI0527 01:13:58.012928 3952 log.go:172] (0xc0009aa000) (0xc0006dc500) Stream removed, broadcasting: 1\nI0527 01:13:58.012940 3952 log.go:172] (0xc0009aa000) Go away received\nI0527 01:13:58.013533 3952 log.go:172] (0xc0009aa000) (0xc0006dc500) Stream removed, broadcasting: 1\nI0527 01:13:58.013555 3952 log.go:172] (0xc0009aa000) (0xc0005321e0) Stream removed, broadcasting: 3\nI0527 01:13:58.013569 3952 log.go:172] (0xc0009aa000) (0xc000454d20) Stream removed, broadcasting: 5\n" May 27 01:13:58.018: INFO: stdout: "" May 27 
01:13:58.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-974 execpod-affinity8dr5n -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.114.227:80/ ; done' May 27 01:13:58.329: INFO: stderr: "I0527 01:13:58.156924 3972 log.go:172] (0xc0004b74a0) (0xc000688500) Create stream\nI0527 01:13:58.156986 3972 log.go:172] (0xc0004b74a0) (0xc000688500) Stream added, broadcasting: 1\nI0527 01:13:58.159916 3972 log.go:172] (0xc0004b74a0) Reply frame received for 1\nI0527 01:13:58.159960 3972 log.go:172] (0xc0004b74a0) (0xc000688e60) Create stream\nI0527 01:13:58.159970 3972 log.go:172] (0xc0004b74a0) (0xc000688e60) Stream added, broadcasting: 3\nI0527 01:13:58.160925 3972 log.go:172] (0xc0004b74a0) Reply frame received for 3\nI0527 01:13:58.160971 3972 log.go:172] (0xc0004b74a0) (0xc00025cd20) Create stream\nI0527 01:13:58.160991 3972 log.go:172] (0xc0004b74a0) (0xc00025cd20) Stream added, broadcasting: 5\nI0527 01:13:58.162488 3972 log.go:172] (0xc0004b74a0) Reply frame received for 5\nI0527 01:13:58.228381 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.228408 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.228431 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ seq 0 15\nI0527 01:13:58.236434 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.236458 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.236476 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.236655 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.236689 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.236717 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.244842 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.244866 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.244889 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.246301 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.246335 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.246346 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.246360 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.246372 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.246385 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.251685 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.251714 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.251745 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.252207 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.252231 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.252260 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.252321 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.252344 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.252364 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.255886 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.255915 
3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.255937 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.255951 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.255962 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.256036 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.256074 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.256094 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.256118 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.259606 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.259628 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.259778 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.260665 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.260718 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.260740 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.260771 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.260836 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.260871 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.264866 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.264880 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.264896 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.265669 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.265689 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.265720 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.265746 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.265771 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.265795 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.269960 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.269976 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.269984 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.270847 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.270871 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.270898 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.270913 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\nI0527 01:13:58.270924 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.270937 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.270957 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\nI0527 01:13:58.270972 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.270999 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.274872 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.274896 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.274916 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.275655 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 
01:13:58.275698 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.275724 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.275759 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.275777 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.275791 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.282780 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.282802 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.282814 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.283180 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.283206 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.283223 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.283254 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.283267 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.283281 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.288062 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.288083 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.288102 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.288528 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.288552 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.288582 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.288595 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.288620 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.288655 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.293302 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.293334 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.293347 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.293658 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.293679 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.293694 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\nI0527 01:13:58.293790 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.293819 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.293844 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.293859 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.293889 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.293914 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.297852 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.297876 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.297901 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.298255 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.298275 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.298292 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 
01:13:58.298309 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.298331 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.298357 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.304421 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.304434 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.304441 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.305090 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.305309 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.305342 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.305360 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.305378 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.305395 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.309268 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.309284 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.309290 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.309772 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.309782 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.309788 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.309795 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.309807 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.309813 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.313092 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.313239 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.313263 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.313770 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.313791 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.313801 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.313814 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.313821 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.313828 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.316921 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.316935 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.316944 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.317474 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.317496 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.317522 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.317540 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.317564 3972 log.go:172] (0xc00025cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.114.227:80/\nI0527 01:13:58.317581 3972 log.go:172] (0xc000688e60) (3) Data frame sent\nI0527 01:13:58.320684 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.320726 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.320752 3972 log.go:172] (0xc000688e60) (3) Data frame 
sent\nI0527 01:13:58.320918 3972 log.go:172] (0xc0004b74a0) Data frame received for 5\nI0527 01:13:58.321004 3972 log.go:172] (0xc00025cd20) (5) Data frame handling\nI0527 01:13:58.321019 3972 log.go:172] (0xc0004b74a0) Data frame received for 3\nI0527 01:13:58.321030 3972 log.go:172] (0xc000688e60) (3) Data frame handling\nI0527 01:13:58.322966 3972 log.go:172] (0xc0004b74a0) Data frame received for 1\nI0527 01:13:58.322986 3972 log.go:172] (0xc000688500) (1) Data frame handling\nI0527 01:13:58.323002 3972 log.go:172] (0xc000688500) (1) Data frame sent\nI0527 01:13:58.323030 3972 log.go:172] (0xc0004b74a0) (0xc000688500) Stream removed, broadcasting: 1\nI0527 01:13:58.323047 3972 log.go:172] (0xc0004b74a0) Go away received\nI0527 01:13:58.323416 3972 log.go:172] (0xc0004b74a0) (0xc000688500) Stream removed, broadcasting: 1\nI0527 01:13:58.323437 3972 log.go:172] (0xc0004b74a0) (0xc000688e60) Stream removed, broadcasting: 3\nI0527 01:13:58.323450 3972 log.go:172] (0xc0004b74a0) (0xc00025cd20) Stream removed, broadcasting: 5\n" May 27 01:13:58.329: INFO: stdout: "\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx\naffinity-clusterip-fpfnx" May 27 01:13:58.329: INFO: Received response from host: May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Received response from host: affinity-clusterip-fpfnx May 27 01:13:58.329: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-974, will wait for the garbage collector to delete the pods May 27 01:13:58.451: INFO: Deleting ReplicationController affinity-clusterip took: 24.143372ms May 27 01:13:58.852: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.230296ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:14:15.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-974" for this suite. 
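
Note: the affinity test above stands up three backends behind a ClusterIP service and curls the service 16 times from a single exec pod; every response naming the same backend (affinity-clusterip-fpfnx) is what proves ClientIP session affinity works. A minimal sketch of a service with that affinity setting follows; the selector label and target port are illustrative assumptions, not the test's actual values.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"name": "affinity-clusterip"}, // illustrative label
                // ClientIP affinity is what pins every request from the exec
                // pod to one backend in the output above.
                SessionAffinity: corev1.ServiceAffinityClientIP,
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(9376), // illustrative backend port
                }},
            },
        }
        fmt.Println(svc.Name)
    }

With the default SessionAffinity of None, kube-proxy would instead spread those 16 requests across all three backends.
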
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.989 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":278,"skipped":4705,"failed":0} SSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:14:15.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:14:15.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9271" for this suite. 
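
Note: the "find a service from listing all namespaces" test above reduces to one API call: list services across every namespace and check that the service just created appears. A minimal client-go sketch follows; the kubeconfig path is taken from this run for illustration.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Listing with metav1.NamespaceAll ("") returns services from every
        // namespace the caller can read; the test then looks for its own.
        svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(
            context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, s := range svcs.Items {
            fmt.Printf("%s/%s\n", s.Namespace, s.Name)
        }
    }

metav1.NamespaceAll is the empty string, so this is the programmatic equivalent of "kubectl get services --all-namespaces".
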
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":279,"skipped":4708,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:14:15.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 27 01:14:15.279: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:14:32.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9498" for this suite. • [SLOW TEST:17.597 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":280,"skipped":4709,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:14:32.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-7817/configmap-test-2ecf96e7-028d-452f-ae06-4bc2d32cc2f1 STEP: Creating a pod to test consume configMaps May 27 01:14:32.859: INFO: Waiting up to 5m0s for pod "pod-configmaps-719c92b1-a1df-4699-aabf-e9aefd3ab219" in namespace "configmap-7817" to be "Succeeded or Failed" May 27 01:14:32.875: INFO: Pod 
"pod-configmaps-719c92b1-a1df-4699-aabf-e9aefd3ab219": Phase="Pending", Reason="", readiness=false. Elapsed: 16.135354ms May 27 01:14:34.880: INFO: Pod "pod-configmaps-719c92b1-a1df-4699-aabf-e9aefd3ab219": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020793374s May 27 01:14:36.884: INFO: Pod "pod-configmaps-719c92b1-a1df-4699-aabf-e9aefd3ab219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024966923s STEP: Saw pod success May 27 01:14:36.884: INFO: Pod "pod-configmaps-719c92b1-a1df-4699-aabf-e9aefd3ab219" satisfied condition "Succeeded or Failed" May 27 01:14:36.887: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-719c92b1-a1df-4699-aabf-e9aefd3ab219 container env-test: STEP: delete the pod May 27 01:14:36.924: INFO: Waiting for pod pod-configmaps-719c92b1-a1df-4699-aabf-e9aefd3ab219 to disappear May 27 01:14:36.928: INFO: Pod pod-configmaps-719c92b1-a1df-4699-aabf-e9aefd3ab219 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:14:36.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7817" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":281,"skipped":4736,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:14:36.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-725 STEP: creating a selector STEP: Creating the service pods in kubernetes May 27 01:14:36.985: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 27 01:14:37.067: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 01:14:39.283: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 27 01:14:41.145: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:14:43.070: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:14:45.072: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:14:47.072: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:14:49.072: INFO: The status of Pod netserver-0 is Running (Ready = false) May 27 01:14:51.072: INFO: The status of Pod netserver-0 is Running (Ready = true) May 27 01:14:51.078: INFO: The status of Pod netserver-1 is Running (Ready = false) May 27 01:14:53.082: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 27 01:14:57.138: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.253:8080/hostName 
| grep -v '^\s*$'] Namespace:pod-network-test-725 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 01:14:57.138: INFO: >>> kubeConfig: /root/.kube/config I0527 01:14:57.194648 8 log.go:172] (0xc002aaa420) (0xc002a76140) Create stream I0527 01:14:57.194698 8 log.go:172] (0xc002aaa420) (0xc002a76140) Stream added, broadcasting: 1 I0527 01:14:57.197039 8 log.go:172] (0xc002aaa420) Reply frame received for 1 I0527 01:14:57.197095 8 log.go:172] (0xc002aaa420) (0xc001058d20) Create stream I0527 01:14:57.197284 8 log.go:172] (0xc002aaa420) (0xc001058d20) Stream added, broadcasting: 3 I0527 01:14:57.198320 8 log.go:172] (0xc002aaa420) Reply frame received for 3 I0527 01:14:57.198355 8 log.go:172] (0xc002aaa420) (0xc000317860) Create stream I0527 01:14:57.198367 8 log.go:172] (0xc002aaa420) (0xc000317860) Stream added, broadcasting: 5 I0527 01:14:57.199384 8 log.go:172] (0xc002aaa420) Reply frame received for 5 I0527 01:14:57.265420 8 log.go:172] (0xc002aaa420) Data frame received for 3 I0527 01:14:57.265442 8 log.go:172] (0xc001058d20) (3) Data frame handling I0527 01:14:57.265458 8 log.go:172] (0xc001058d20) (3) Data frame sent I0527 01:14:57.265887 8 log.go:172] (0xc002aaa420) Data frame received for 3 I0527 01:14:57.265913 8 log.go:172] (0xc001058d20) (3) Data frame handling I0527 01:14:57.265956 8 log.go:172] (0xc002aaa420) Data frame received for 5 I0527 01:14:57.265970 8 log.go:172] (0xc000317860) (5) Data frame handling I0527 01:14:57.267916 8 log.go:172] (0xc002aaa420) Data frame received for 1 I0527 01:14:57.267949 8 log.go:172] (0xc002a76140) (1) Data frame handling I0527 01:14:57.267976 8 log.go:172] (0xc002a76140) (1) Data frame sent I0527 01:14:57.268011 8 log.go:172] (0xc002aaa420) (0xc002a76140) Stream removed, broadcasting: 1 I0527 01:14:57.268045 8 log.go:172] (0xc002aaa420) Go away received I0527 01:14:57.268166 8 log.go:172] (0xc002aaa420) (0xc002a76140) Stream removed, broadcasting: 1 I0527 01:14:57.268205 8 log.go:172] (0xc002aaa420) (0xc001058d20) Stream removed, broadcasting: 3 I0527 01:14:57.268230 8 log.go:172] (0xc002aaa420) (0xc000317860) Stream removed, broadcasting: 5 May 27 01:14:57.268: INFO: Found all expected endpoints: [netserver-0] May 27 01:14:57.271: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.10:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-725 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 27 01:14:57.271: INFO: >>> kubeConfig: /root/.kube/config I0527 01:14:57.305703 8 log.go:172] (0xc002aaaa50) (0xc002a76fa0) Create stream I0527 01:14:57.305735 8 log.go:172] (0xc002aaaa50) (0xc002a76fa0) Stream added, broadcasting: 1 I0527 01:14:57.307610 8 log.go:172] (0xc002aaaa50) Reply frame received for 1 I0527 01:14:57.307633 8 log.go:172] (0xc002aaaa50) (0xc002a77040) Create stream I0527 01:14:57.307641 8 log.go:172] (0xc002aaaa50) (0xc002a77040) Stream added, broadcasting: 3 I0527 01:14:57.308598 8 log.go:172] (0xc002aaaa50) Reply frame received for 3 I0527 01:14:57.308620 8 log.go:172] (0xc002aaaa50) (0xc001058e60) Create stream I0527 01:14:57.308627 8 log.go:172] (0xc002aaaa50) (0xc001058e60) Stream added, broadcasting: 5 I0527 01:14:57.309688 8 log.go:172] (0xc002aaaa50) Reply frame received for 5 I0527 01:14:57.515908 8 log.go:172] (0xc002aaaa50) Data frame received for 5 I0527 01:14:57.515952 8 log.go:172] (0xc001058e60) 
(5) Data frame handling I0527 01:14:57.515978 8 log.go:172] (0xc002aaaa50) Data frame received for 3 I0527 01:14:57.515990 8 log.go:172] (0xc002a77040) (3) Data frame handling I0527 01:14:57.516008 8 log.go:172] (0xc002a77040) (3) Data frame sent I0527 01:14:57.516024 8 log.go:172] (0xc002aaaa50) Data frame received for 3 I0527 01:14:57.516047 8 log.go:172] (0xc002a77040) (3) Data frame handling I0527 01:14:57.517319 8 log.go:172] (0xc002aaaa50) Data frame received for 1 I0527 01:14:57.517402 8 log.go:172] (0xc002a76fa0) (1) Data frame handling I0527 01:14:57.517444 8 log.go:172] (0xc002a76fa0) (1) Data frame sent I0527 01:14:57.517485 8 log.go:172] (0xc002aaaa50) (0xc002a76fa0) Stream removed, broadcasting: 1 I0527 01:14:57.517510 8 log.go:172] (0xc002aaaa50) Go away received I0527 01:14:57.517566 8 log.go:172] (0xc002aaaa50) (0xc002a76fa0) Stream removed, broadcasting: 1 I0527 01:14:57.517581 8 log.go:172] (0xc002aaaa50) (0xc002a77040) Stream removed, broadcasting: 3 I0527 01:14:57.517588 8 log.go:172] (0xc002aaaa50) (0xc001058e60) Stream removed, broadcasting: 5 May 27 01:14:57.517: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:14:57.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-725" for this suite. • [SLOW TEST:20.590 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:14:57.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 27 01:14:57.659: INFO: Pod name pod-release: Found 0 pods out of 1 May 27 01:15:02.666: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 27 01:15:03.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "replication-controller-5086" for this suite. • [SLOW TEST:5.672 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":283,"skipped":4771,"failed":0} [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 27 01:15:03.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 27 01:15:03.867: INFO: Creating deployment "test-recreate-deployment" May 27 01:15:04.151: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 27 01:15:04.351: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 27 01:15:06.359: INFO: Waiting deployment "test-recreate-deployment" to complete May 27 01:15:06.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138904, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138904, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138905, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138904, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 01:15:08.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138904, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138904, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726138905, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63726138904, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 27 01:15:10.370: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 27 01:15:10.382: INFO: Updating deployment test-recreate-deployment May 27 01:15:10.382: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 27 01:15:11.100: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6229 /apis/apps/v1/namespaces/deployment-6229/deployments/test-recreate-deployment d95aa3d4-a17f-4ea2-9bf8-5674c98a6e22 7965630 2 2020-05-27 01:15:03 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-27 01:15:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-27 01:15:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004003de8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum 
availability.,LastUpdateTime:2020-05-27 01:15:10 +0000 UTC,LastTransitionTime:2020-05-27 01:15:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-27 01:15:10 +0000 UTC,LastTransitionTime:2020-05-27 01:15:04 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 27 01:15:11.111: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-6229 /apis/apps/v1/namespaces/deployment-6229/replicasets/test-recreate-deployment-d5667d9c7 7a2b734c-182a-47d9-ab96-ac5f3fc32d72 7965628 1 2020-05-27 01:15:10 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment d95aa3d4-a17f-4ea2-9bf8-5674c98a6e22 0xc003cdc420 0xc003cdc421}] [] [{kube-controller-manager Update apps/v1 2020-05-27 01:15:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d95aa3d4-a17f-4ea2-9bf8-5674c98a6e22\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003cdc4c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 27 01:15:11.111: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 27 01:15:11.111: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-6229 
/apis/apps/v1/namespaces/deployment-6229/replicasets/test-recreate-deployment-6d65b9f6d8 55bd733b-721c-4707-8630-4564b2135de9 7965619 2 2020-05-27 01:15:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d95aa3d4-a17f-4ea2-9bf8-5674c98a6e22 0xc003cdc307 0xc003cdc308}] [] [{kube-controller-manager Update apps/v1 2020-05-27 01:15:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d95aa3d4-a17f-4ea2-9bf8-5674c98a6e22\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003cdc3a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 27 01:15:11.115: INFO: Pod "test-recreate-deployment-d5667d9c7-w8b9p" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-w8b9p test-recreate-deployment-d5667d9c7- deployment-6229 /api/v1/namespaces/deployment-6229/pods/test-recreate-deployment-d5667d9c7-w8b9p 71e1732f-7ba0-45c5-bfe5-7813b58b0021 7965633 0 2020-05-27 01:15:10 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 7a2b734c-182a-47d9-ab96-ac5f3fc32d72 0xc003cdcb00 0xc003cdcb01}] [] [{kube-controller-manager Update v1 2020-05-27 01:15:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a2b734c-182a-47d9-ab96-ac5f3fc32d72\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-27 01:15:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gkq5r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gkq5r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gkq5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecon
ds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 01:15:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 01:15:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 01:15:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-27 01:15:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-27 01:15:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 01:15:11.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6229" for this suite.
• [SLOW TEST:7.924 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":284,"skipped":4771,"failed":0}
SS
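The spec above exercises the Recreate strategy, which is why the dumps show the old agnhost ReplicaSet scaled to Replicas:*0 before the new httpd pod appears: Recreate tears down every old pod before the new ReplicaSet scales up. For reference, the object under test looks roughly like the client-go sketch below; the deployment name, namespace, labels, image, and kubeconfig path are taken from this run, while the surrounding program is illustrative rather than the suite's actual test code.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite logs above (>>> kubeConfig: /root/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(1)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			// Recreate kills every pod of the old ReplicaSet before the new
			// ReplicaSet scales up, so old and new pods never run together.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("deployment-6229").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Updating the pod template (as the rollout trigger at 01:15:10 does) is what produces the second ReplicaSet seen in the dumps.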
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 01:15:11.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 01:15:16.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5551" for this suite.
• [SLOW TEST:5.491 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":285,"skipped":4773,"failed":0}
SSS
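The property this spec verifies is that watchers opened at the same resourceVersion observe identical event sequences, because the apiserver serves every watch from one ordered event stream. A minimal client-go sketch of opening such a watch follows; the namespace matches this run, but the resource kind and the overall program are illustrative, not the test's implementation.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List once to obtain a resourceVersion, then watch from exactly that
	// point. Two watches started from the same version must deliver the
	// same events in the same order.
	cms := client.CoreV1().ConfigMaps("watch-5551")
	list, err := cms.List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	w, err := cms.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: list.ResourceVersion})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED, MODIFIED, DELETED, ...
	}
}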
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 01:15:16.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-85ad1f41-f1d7-4efd-a1a2-d166db6570b7
STEP: Creating a pod to test consume configMaps
May 27 01:15:16.701: INFO: Waiting up to 5m0s for pod "pod-configmaps-28a83b79-852e-4913-a4f8-2f816227d8ef" in namespace "configmap-1186" to be "Succeeded or Failed"
May 27 01:15:16.738: INFO: Pod "pod-configmaps-28a83b79-852e-4913-a4f8-2f816227d8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 36.768391ms
May 27 01:15:18.755: INFO: Pod "pod-configmaps-28a83b79-852e-4913-a4f8-2f816227d8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054069802s
May 27 01:15:20.761: INFO: Pod "pod-configmaps-28a83b79-852e-4913-a4f8-2f816227d8ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059423108s
STEP: Saw pod success
May 27 01:15:20.761: INFO: Pod "pod-configmaps-28a83b79-852e-4913-a4f8-2f816227d8ef" satisfied condition "Succeeded or Failed"
May 27 01:15:20.764: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-28a83b79-852e-4913-a4f8-2f816227d8ef container configmap-volume-test:
STEP: delete the pod
May 27 01:15:20.884: INFO: Waiting for pod pod-configmaps-28a83b79-852e-4913-a4f8-2f816227d8ef to disappear
May 27 01:15:20.888: INFO: Pod pod-configmaps-28a83b79-852e-4913-a4f8-2f816227d8ef no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 01:15:20.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1186" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":286,"skipped":4776,"failed":0}
SSSSSSSSSSSSSS
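In the "with mappings" variant, the pod mounts the ConfigMap through an items list, so a key is projected at a chosen relative path rather than at a file named after the key. A rough sketch of such a pod follows; the namespace and container name come from this run, while the ConfigMap name is shortened here and the key, path, image, and command are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
					// The mapping under test: key "data-1" is exposed at
					// path/to/data-2 instead of at a file named data-1.
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("configmap-1186").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Because the container runs a single command and exits, the harness can wait for "Succeeded or Failed" and then read the container log, exactly as the entries above show.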
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 01:15:20.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 27 01:15:20.945: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 27 01:15:20.971: INFO: Waiting for terminating namespaces to be deleted...
May 27 01:15:20.974: INFO: Logging pods the apiserver thinks are on node latest-worker before test
May 27 01:15:20.979: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded)
May 27 01:15:20.979: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 27 01:15:20.979: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded)
May 27 01:15:20.979: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 27 01:15:20.979: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded)
May 27 01:15:20.979: INFO: Container kindnet-cni ready: true, restart count 2
May 27 01:15:20.979: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded)
May 27 01:15:20.979: INFO: Container kube-proxy ready: true, restart count 0
May 27 01:15:20.979: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test
May 27 01:15:21.009: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded)
May 27 01:15:21.009: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 27 01:15:21.009: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded)
May 27 01:15:21.009: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 27 01:15:21.009: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded)
May 27 01:15:21.009: INFO: Container kindnet-cni ready: true, restart count 2
May 27 01:15:21.009: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded)
May 27 01:15:21.009: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c532a281-5ed5-4765-934f-ef8b8c158680 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c532a281-5ed5-4765-934f-ef8b8c158680 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c532a281-5ed5-4765-934f-ef8b8c158680
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 01:15:31.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6223" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:10.268 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":287,"skipped":4790,"failed":0}
SSS
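The spec's flow is visible in the STEP lines: schedule an unlabeled pod to discover a usable node, apply a random label to that node, then relaunch the pod with a matching nodeSelector. A condensed sketch of the label-and-relaunch half follows; the label key, the value 42, the node name, and the namespace come from this run, while the pod details and surrounding program are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Apply the label the spec uses to the node it found.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "latest-worker2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Labels["kubernetes.io/e2e-c532a281-5ed5-4765-934f-ef8b8c158680"] = "42"
	if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Relaunch the pod with a nodeSelector requiring that label; the
	// scheduler may now only place it on the labeled node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-c532a281-5ed5-4765-934f-ef8b8c158680": "42"},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	if _, err := client.CoreV1().Pods("sched-pred-6223").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; it should land on latest-worker2")
}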
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 27 01:15:31.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 27 01:15:31.245: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57b9f84b-c4ce-4565-9106-0924c739b628" in namespace "projected-5027" to be "Succeeded or Failed"
May 27 01:15:31.248: INFO: Pod "downwardapi-volume-57b9f84b-c4ce-4565-9106-0924c739b628": Phase="Pending", Reason="", readiness=false. Elapsed: 3.072821ms
May 27 01:15:33.253: INFO: Pod "downwardapi-volume-57b9f84b-c4ce-4565-9106-0924c739b628": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008055039s
May 27 01:15:35.258: INFO: Pod "downwardapi-volume-57b9f84b-c4ce-4565-9106-0924c739b628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012812011s
STEP: Saw pod success
May 27 01:15:35.258: INFO: Pod "downwardapi-volume-57b9f84b-c4ce-4565-9106-0924c739b628" satisfied condition "Succeeded or Failed"
May 27 01:15:35.262: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-57b9f84b-c4ce-4565-9106-0924c739b628 container client-container:
STEP: delete the pod
May 27 01:15:35.323: INFO: Waiting for pod downwardapi-volume-57b9f84b-c4ce-4565-9106-0924c739b628 to disappear
May 27 01:15:35.332: INFO: Pod downwardapi-volume-57b9f84b-c4ce-4565-9106-0924c739b628 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 27 01:15:35.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5027" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4793,"failed":0}
SSSSSSSSSSSSSS
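Here the pod mounts a projected volume whose downwardAPI source exposes the container's own memory request as a file, which the container then prints to its log. A minimal sketch follows; the container name and namespace come from this run, while the image, command, 32Mi request, and file path are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								// Expose the container's memory request as a file.
								Path: "memory_request",
								ResourceFieldRef: &corev1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "requests.memory",
								},
							}},
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("projected-5027").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}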
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4793,"failed":0} SSSSSSSSSSSSSSMay 27 01:15:35.341: INFO: Running AfterSuite actions on all nodes May 27 01:15:35.341: INFO: Running AfterSuite actions on node 1 May 27 01:15:35.341: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0} Ran 288 of 5095 Specs in 5828.078 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped PASS