I0516 23:38:12.324409 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0516 23:38:12.324663 7 e2e.go:129] Starting e2e run "08629696-2499-4706-9fe8-af1fe331cacd" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589672291 - Will randomize all specs
Will run 288 of 5095 specs

May 16 23:38:12.388: INFO: >>> kubeConfig: /root/.kube/config
May 16 23:38:12.392: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 16 23:38:12.415: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 16 23:38:12.451: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 16 23:38:12.451: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 16 23:38:12.451: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 16 23:38:12.462: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 16 23:38:12.462: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 16 23:38:12.462: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 16 23:38:12.463: INFO: kube-apiserver version: v1.18.2
May 16 23:38:12.463: INFO: >>> kubeConfig: /root/.kube/config
May 16 23:38:12.468: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:38:12.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
May 16 23:38:12.546: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 16 23:38:12.550: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 16 23:38:14.595: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:38:16.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8242" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":1,"skipped":14,"failed":0}
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:38:16.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:38:21.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2402" for this suite.
• [SLOW TEST:5.030 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":14,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:38:21.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 16 23:38:29.856: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 16 23:38:29.883: INFO: Pod pod-with-prestop-exec-hook still exists
May 16 23:38:31.884: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 16 23:38:31.888: INFO: Pod pod-with-prestop-exec-hook still exists
May 16 23:38:33.884: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 16 23:38:33.898: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:38:33.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-830" for this suite.
• [SLOW TEST:12.254 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":3,"skipped":16,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:38:33.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 16 23:38:33.990: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:38:35.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7630" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":4,"skipped":23,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:38:35.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
STEP: reading a file in the container
May 16 23:38:39.908: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4947 pod-service-account-f3cdfb00-3e78-49c8-bf21-a07032e9a7b8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 16 23:38:43.045: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4947 pod-service-account-f3cdfb00-3e78-49c8-bf21-a07032e9a7b8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 16 23:38:43.290: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4947 pod-service-account-f3cdfb00-3e78-49c8-bf21-a07032e9a7b8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:38:43.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4947" for this suite.
• [SLOW TEST:8.214 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":5,"skipped":71,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:38:43.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 16 23:38:43.639: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 16 23:38:43.658: INFO: Waiting for terminating namespaces to be deleted...
May 16 23:38:43.661: INFO: Logging pods the apiserver thinks is on node latest-worker before test
May 16 23:38:43.668: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.668: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 16 23:38:43.668: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.669: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 16 23:38:43.669: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.669: INFO: Container kindnet-cni ready: true, restart count 0
May 16 23:38:43.669: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.669: INFO: Container kube-proxy ready: true, restart count 0
May 16 23:38:43.669: INFO: busybox-host-aliasesc02d599d-179f-4c9d-bb3b-ddfe46b96ff0 from kubelet-test-2402 started at 2020-05-16 23:38:17 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.669: INFO: Container busybox-host-aliasesc02d599d-179f-4c9d-bb3b-ddfe46b96ff0 ready: true, restart count 0
May 16 23:38:43.669: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
May 16 23:38:43.674: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.674: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 16 23:38:43.674: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.674: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 16 23:38:43.674: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.674: INFO: Container kindnet-cni ready: true, restart count 0
May 16 23:38:43.674: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.674: INFO: Container kube-proxy ready: true, restart count 0
May 16 23:38:43.674: INFO: pod-service-account-f3cdfb00-3e78-49c8-bf21-a07032e9a7b8 from svcaccounts-4947 started at 2020-05-16 23:38:35 +0000 UTC (1 container statuses recorded)
May 16 23:38:43.674: INFO: Container test ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-380e875e-40d9-458b-98ac-e297de620b8c 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-380e875e-40d9-458b-98ac-e297de620b8c off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-380e875e-40d9-458b-98ac-e297de620b8c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:39:01.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8525" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:18.410 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":6,"skipped":74,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:39:01.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ababb567-d6d1-44ab-9f9c-a88045703718
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ababb567-d6d1-44ab-9f9c-a88045703718
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:39:08.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9342" for this suite.
• [SLOW TEST:6.293 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":7,"skipped":127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:39:08.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-911d3f35-fc9b-4448-aefe-37fddbb1ed39
STEP: Creating a pod to test consume secrets
May 16 23:39:08.617: INFO: Waiting up to 5m0s for pod "pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33" in namespace "secrets-1110" to be "Succeeded or Failed"
May 16 23:39:08.635: INFO: Pod "pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33": Phase="Pending", Reason="", readiness=false. Elapsed: 17.613908ms
May 16 23:39:10.639: INFO: Pod "pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021583718s
May 16 23:39:12.643: INFO: Pod "pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025860855s
May 16 23:39:14.648: INFO: Pod "pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030313358s
STEP: Saw pod success
May 16 23:39:14.648: INFO: Pod "pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33" satisfied condition "Succeeded or Failed"
May 16 23:39:14.651: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33 container secret-env-test:
STEP: delete the pod
May 16 23:39:14.672: INFO: Waiting for pod pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33 to disappear
May 16 23:39:14.688: INFO: Pod pod-secrets-c65a5f03-f9c7-4f9a-9bdb-6466d9228a33 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:39:14.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1110" for this suite.
• [SLOW TEST:6.477 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":155,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:39:14.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-9626
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 16 23:39:14.763: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 16 23:39:14.849: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 16 23:39:16.855: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 16 23:39:18.852: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 16 23:39:20.854: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 16 23:39:22.853: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 16 23:39:24.854: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 16 23:39:26.853: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 16 23:39:28.854: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 16 23:39:30.854: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 16 23:39:30.860: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 16 23:39:32.865: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 16 23:39:34.865: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 16 23:39:36.864: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 16 23:39:40.896: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.95:8080/dial?request=hostname&protocol=http&host=10.244.1.39&port=8080&tries=1'] Namespace:pod-network-test-9626 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 16 23:39:40.896: INFO: >>> kubeConfig: /root/.kube/config
I0516 23:39:40.924547 7 log.go:172] (0xc0017dbad0) (0xc002185900) Create stream
I0516 23:39:40.924625 7 log.go:172] (0xc0017dbad0) (0xc002185900) Stream added, broadcasting: 1
I0516 23:39:40.927110 7 log.go:172] (0xc0017dbad0) Reply frame received for 1
I0516 23:39:40.927136 7 log.go:172] (0xc0017dbad0) (0xc001c95a40) Create stream
I0516 23:39:40.927153 7 log.go:172] (0xc0017dbad0) (0xc001c95a40) Stream added, broadcasting: 3
I0516 23:39:40.927998 7 log.go:172] (0xc0017dbad0) Reply frame received for 3
I0516 23:39:40.928022 7 log.go:172] (0xc0017dbad0) (0xc0021859a0) Create stream
I0516 23:39:40.928034 7 log.go:172] (0xc0017dbad0) (0xc0021859a0) Stream added, broadcasting: 5
I0516 23:39:40.928877 7 log.go:172] (0xc0017dbad0) Reply frame received for 5
I0516 23:39:41.031454 7 log.go:172] (0xc0017dbad0) Data frame received for 3
I0516 23:39:41.031552 7 log.go:172] (0xc001c95a40) (3) Data frame handling
I0516 23:39:41.031600 7 log.go:172] (0xc001c95a40) (3) Data frame sent
I0516 23:39:41.031822 7 log.go:172] (0xc0017dbad0) Data frame received for 5
I0516 23:39:41.031854 7 log.go:172] (0xc0021859a0) (5) Data frame handling
I0516 23:39:41.031882 7 log.go:172] (0xc0017dbad0) Data frame received for 3
I0516 23:39:41.031906 7 log.go:172] (0xc001c95a40) (3) Data frame handling
I0516 23:39:41.033529 7 log.go:172] (0xc0017dbad0) Data frame received for 1
I0516 23:39:41.033554 7 log.go:172] (0xc002185900) (1) Data frame handling
I0516 23:39:41.033575 7 log.go:172] (0xc002185900) (1) Data frame sent
I0516 23:39:41.033597 7 log.go:172] (0xc0017dbad0) (0xc002185900) Stream removed, broadcasting: 1
I0516 23:39:41.033822 7 log.go:172] (0xc0017dbad0) Go away received
I0516 23:39:41.033965 7 log.go:172] (0xc0017dbad0) (0xc002185900) Stream removed, broadcasting: 1
I0516 23:39:41.034043 7 log.go:172] (0xc0017dbad0) (0xc001c95a40) Stream removed, broadcasting: 3
I0516 23:39:41.034063 7 log.go:172] (0xc0017dbad0) (0xc0021859a0) Stream removed, broadcasting: 5
May 16 23:39:41.034: INFO: Waiting for responses: map[]
May 16 23:39:41.037: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.95:8080/dial?request=hostname&protocol=http&host=10.244.2.94&port=8080&tries=1'] Namespace:pod-network-test-9626 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 16 23:39:41.037: INFO: >>> kubeConfig: /root/.kube/config
I0516 23:39:41.069108 7 log.go:172] (0xc0024f3810) (0xc002126000) Create stream
I0516 23:39:41.069317 7 log.go:172] (0xc0024f3810) (0xc002126000) Stream added, broadcasting: 1
I0516 23:39:41.072084 7 log.go:172] (0xc0024f3810) Reply frame received for 1
I0516 23:39:41.072122 7 log.go:172] (0xc0024f3810) (0xc0020dcfa0) Create stream
I0516 23:39:41.072138 7 log.go:172] (0xc0024f3810) (0xc0020dcfa0) Stream added, broadcasting: 3
I0516 23:39:41.073865 7 log.go:172] (0xc0024f3810) Reply frame received for 3
I0516 23:39:41.073915 7 log.go:172] (0xc0024f3810) (0xc002185a40) Create stream
I0516 23:39:41.073937 7 log.go:172] (0xc0024f3810) (0xc002185a40) Stream added, broadcasting: 5
I0516 23:39:41.075282 7 log.go:172] (0xc0024f3810) Reply frame received for 5
I0516 23:39:41.165742 7 log.go:172] (0xc0024f3810) Data frame received for 3
I0516 23:39:41.165778 7 log.go:172] (0xc0020dcfa0) (3) Data frame handling
I0516 23:39:41.165807 7 log.go:172] (0xc0020dcfa0) (3) Data frame sent
I0516 23:39:41.165882 7 log.go:172] (0xc0024f3810) Data frame received for 3
I0516 23:39:41.165895 7 log.go:172] (0xc0020dcfa0) (3) Data frame handling
I0516 23:39:41.165906 7 log.go:172] (0xc0024f3810) Data frame received for 5
I0516 23:39:41.165914 7 log.go:172] (0xc002185a40) (5) Data frame handling
I0516 23:39:41.167380 7 log.go:172] (0xc0024f3810) Data frame received for 1
I0516 23:39:41.167401 7 log.go:172] (0xc002126000) (1) Data frame handling
I0516 23:39:41.167408 7 log.go:172] (0xc002126000) (1) Data frame sent
I0516 23:39:41.167415 7 log.go:172] (0xc0024f3810) (0xc002126000) Stream removed, broadcasting: 1
I0516 23:39:41.167494 7 log.go:172] (0xc0024f3810) (0xc002126000) Stream removed, broadcasting: 1
I0516 23:39:41.167503 7 log.go:172] (0xc0024f3810) (0xc0020dcfa0) Stream removed, broadcasting: 3
I0516 23:39:41.167509 7 log.go:172] (0xc0024f3810) (0xc002185a40) Stream removed, broadcasting: 5
May 16 23:39:41.167: INFO: Waiting for responses: map[]
I0516 23:39:41.167557 7 log.go:172] (0xc0024f3810) Go away received
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:39:41.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9626" for this suite.
• [SLOW TEST:26.478 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":9,"skipped":163,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:39:41.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
May 16 23:39:41.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions'
May 16 23:39:41.474: INFO: stderr: ""
May 16 23:39:41.474: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:39:41.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2637" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":10,"skipped":181,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:39:41.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 16 23:39:41.965: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 16 23:39:43.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269181, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269181, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269182, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269181, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 23:39:47.067: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 23:39:47.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:39:48.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-137" for this suite. 
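The "Wait for the deployment to be ready" step above loops until the DeploymentStatus it logs (ReadyReplicas:0, AvailableReplicas:0 at first) shows full availability. A simplified Python mirror of that readiness decision, under the assumption that readiness means the controller has observed the latest spec and all replicas are updated and available (field and function names here are illustrative):

```python
from dataclasses import dataclass

# Hypothetical, simplified mirror of the v1.DeploymentStatus fields in the log.
@dataclass
class DeploymentStatus:
    observed_generation: int
    replicas: int
    updated_replicas: int
    available_replicas: int

def deployment_complete(generation: int, s: DeploymentStatus) -> bool:
    # Ready once the controller has seen the latest spec generation and
    # every replica is both updated and available.
    return (s.observed_generation >= generation
            and s.updated_replicas == s.replicas
            and s.available_replicas == s.replicas)

# The 23:39:43 status above: 1 updated replica but 0 available -> keep waiting.
assert not deployment_complete(1, DeploymentStatus(1, 1, 1, 0))
assert deployment_complete(1, DeploymentStatus(1, 1, 1, 1))
```

This is why the log prints the full status dump on each poll: the "MinimumReplicasUnavailable" condition clears only once AvailableReplicas catches up.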
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.094 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":11,"skipped":185,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:39:48.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 16 23:39:48.645: INFO: Waiting up to 5m0s for pod "pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88" in namespace "emptydir-1246" to be "Succeeded or Failed" May 16 23:39:48.701: INFO: Pod "pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.881987ms May 16 23:39:50.744: INFO: Pod "pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099639787s May 16 23:39:52.749: INFO: Pod "pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88": Phase="Running", Reason="", readiness=true. Elapsed: 4.104038113s May 16 23:39:54.754: INFO: Pod "pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108967357s STEP: Saw pod success May 16 23:39:54.754: INFO: Pod "pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88" satisfied condition "Succeeded or Failed" May 16 23:39:54.757: INFO: Trying to get logs from node latest-worker pod pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88 container test-container: STEP: delete the pod May 16 23:39:54.834: INFO: Waiting for pod pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88 to disappear May 16 23:39:54.844: INFO: Pod pod-fdb44875-c73a-4ab7-ae7c-9b8920868e88 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:39:54.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1246" for this suite. 
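The 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines above come from a poll loop that re-reads the pod phase until it is terminal. A sketch of that loop in Python (the phase sequence is taken from the log; the function and its polling interface are hypothetical stand-ins for the Go framework's wait helpers):

```python
# Phases that end the wait: the pod has finished, one way or the other.
TERMINAL = {"Succeeded", "Failed"}

def wait_for_terminal_phase(phases, max_polls):
    """Consume successive observed phases until one is terminal or we give up."""
    for _, phase in zip(range(max_polls), phases):
        if phase in TERMINAL:
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Sequence observed in the log: Pending -> Pending -> Running -> Succeeded.
observed = ["Pending", "Pending", "Running", "Succeeded"]
assert wait_for_terminal_phase(observed, max_polls=150) == "Succeeded"
```

Note the loop treats "Running" as non-terminal, which is why the log shows an intermediate Phase="Running", readiness=true sample before "Saw pod success".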
• [SLOW TEST:6.321 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":211,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:39:54.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 23:39:55.576: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 23:39:57.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269195, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269195, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269195, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 23:39:59.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269195, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725269195, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 23:40:02.623: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 23:40:02.627: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6398-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:40:03.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7609" for this suite. STEP: Destroying namespace "webhook-7609-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.936 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":13,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:40:03.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-1296 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1296 to expose endpoints map[] May 16 23:40:03.986: INFO: Get endpoints failed (13.115131ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 16 23:40:04.988: INFO: successfully validated that service endpoint-test2 in namespace services-1296 exposes endpoints map[] (1.015530268s elapsed) STEP: Creating pod pod1 in namespace services-1296 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1296 to expose endpoints map[pod1:[80]] May 16 23:40:08.343: INFO: successfully validated that service endpoint-test2 in namespace services-1296 exposes endpoints map[pod1:[80]] (3.348751405s elapsed) STEP: Creating pod pod2 in namespace services-1296 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1296 to expose endpoints map[pod1:[80] pod2:[80]] May 16 23:40:12.520: INFO: successfully validated that service endpoint-test2 in namespace services-1296 exposes endpoints map[pod1:[80] pod2:[80]] (4.172368396s elapsed) STEP: Deleting pod pod1 in namespace services-1296 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1296 to expose endpoints map[pod2:[80]] May 16 23:40:13.609: INFO: successfully validated that service endpoint-test2 in namespace services-1296 exposes endpoints map[pod2:[80]] (1.084904658s elapsed) STEP: Deleting pod pod2 in namespace services-1296 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1296 to expose endpoints map[] May 16 23:40:14.623: INFO: successfully validated that service endpoint-test2 in namespace services-1296 exposes endpoints map[] (1.009202791s 
elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:40:14.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1296" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.988 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":14,"skipped":269,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:40:14.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-bad3454e-ae5d-4166-b6eb-0e7cb0e34f4d STEP: Creating a pod to test consume configMaps May 16 23:40:14.967: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3c932d8-1fca-4b15-8191-5fffdd9d337f" in namespace "projected-4111" to be 
"Succeeded or Failed" May 16 23:40:15.001: INFO: Pod "pod-projected-configmaps-b3c932d8-1fca-4b15-8191-5fffdd9d337f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.645446ms May 16 23:40:17.005: INFO: Pod "pod-projected-configmaps-b3c932d8-1fca-4b15-8191-5fffdd9d337f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037695236s May 16 23:40:19.011: INFO: Pod "pod-projected-configmaps-b3c932d8-1fca-4b15-8191-5fffdd9d337f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043791509s STEP: Saw pod success May 16 23:40:19.011: INFO: Pod "pod-projected-configmaps-b3c932d8-1fca-4b15-8191-5fffdd9d337f" satisfied condition "Succeeded or Failed" May 16 23:40:19.014: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b3c932d8-1fca-4b15-8191-5fffdd9d337f container projected-configmap-volume-test: STEP: delete the pod May 16 23:40:19.220: INFO: Waiting for pod pod-projected-configmaps-b3c932d8-1fca-4b15-8191-5fffdd9d337f to disappear May 16 23:40:19.252: INFO: Pod pod-projected-configmaps-b3c932d8-1fca-4b15-8191-5fffdd9d337f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:40:19.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4111" for this suite. 
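The Services test further up validates lines like 'exposes endpoints map[pod1:[80] pod2:[80]]' by comparing an expected pod-name-to-ports map against what the Endpoints object reports. A sketch of that comparison in Python (names are illustrative, not the framework's Go identifiers):

```python
def endpoints_match(expected, observed):
    """Compare pod-name -> port-list maps; port order is not significant."""
    return (expected.keys() == observed.keys()
            and all(sorted(expected[p]) == sorted(observed[p]) for p in expected))

# Both pods serving port 80, in either order.
assert endpoints_match({"pod1": [80], "pod2": [80]}, {"pod2": [80], "pod1": [80]})
# Observed endpoints have not yet caught up after a pod was deleted.
assert not endpoints_match({"pod2": [80]}, {"pod1": [80], "pod2": [80]})
# Empty map once both pods are gone, as in the final validation above.
assert endpoints_match({}, {})
```

Because endpoint propagation is asynchronous, the test retries this comparison for up to 3m0s, which is why each success line reports the elapsed time.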
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":15,"skipped":283,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:40:19.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 16 23:40:19.343: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7553 /api/v1/namespaces/watch-7553/configmaps/e2e-watch-test-label-changed 7df2eda5-210b-47bc-a2ff-a79804bc1398 5272167 0 2020-05-16 23:40:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 23:40:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 16 23:40:19.343: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7553 /api/v1/namespaces/watch-7553/configmaps/e2e-watch-test-label-changed 7df2eda5-210b-47bc-a2ff-a79804bc1398 
5272168 0 2020-05-16 23:40:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 23:40:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 23:40:19.344: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7553 /api/v1/namespaces/watch-7553/configmaps/e2e-watch-test-label-changed 7df2eda5-210b-47bc-a2ff-a79804bc1398 5272169 0 2020-05-16 23:40:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 23:40:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 16 23:40:29.430: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7553 /api/v1/namespaces/watch-7553/configmaps/e2e-watch-test-label-changed 7df2eda5-210b-47bc-a2ff-a79804bc1398 5272228 0 2020-05-16 23:40:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 23:40:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 23:40:29.430: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7553 
/api/v1/namespaces/watch-7553/configmaps/e2e-watch-test-label-changed 7df2eda5-210b-47bc-a2ff-a79804bc1398 5272229 0 2020-05-16 23:40:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 23:40:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 23:40:29.430: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7553 /api/v1/namespaces/watch-7553/configmaps/e2e-watch-test-label-changed 7df2eda5-210b-47bc-a2ff-a79804bc1398 5272230 0 2020-05-16 23:40:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-16 23:40:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:40:29.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7553" for this suite. 
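The DELETED and ADDED events above are notable because the configmap was never actually deleted at that point: a label-selector watch reports objects *leaving* the selected set as deletions and objects re-entering it as additions. A simplified model of that event classification (not the apiserver's implementation; names are illustrative):

```python
def watch_event(selector, old_labels, new_labels):
    """Classify the event a label-selector watch delivers for a label change."""
    def matches(labels):
        return all(labels.get(k) == v for k, v in selector.items())
    before, after = matches(old_labels), matches(new_labels)
    if before and after:
        return "MODIFIED"
    if before and not after:
        return "DELETED"   # object left the selected set
    if not before and after:
        return "ADDED"     # object (re-)entered the selected set
    return None            # never visible to this watch

sel = {"watch-this-configmap": "label-changed-and-restored"}
assert watch_event(sel, sel, {"watch-this-configmap": "other"}) == "DELETED"
assert watch_event(sel, {"watch-this-configmap": "other"}, sel) == "ADDED"
```

This matches the log: changing the label away yields DELETED, modifying while unlabeled yields nothing, and restoring the label yields ADDED with the mutations made in between.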
• [SLOW TEST:10.226 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":16,"skipped":303,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:40:29.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP:
finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:40:29.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1776" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":17,"skipped":311,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:40:29.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 16 23:40:29.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 16 23:40:29.776: INFO: stderr: "" May 16 23:40:29.776: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:40:29.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5467" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":18,"skipped":334,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:40:29.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-vwc9 STEP: Creating a pod to test atomic-volume-subpath May 16 23:40:29.965: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vwc9" in namespace "subpath-8046" to be "Succeeded or Failed" May 16 23:40:29.972: INFO: Pod "pod-subpath-test-projected-vwc9": 
Phase="Pending", Reason="", readiness=false. Elapsed: 7.373273ms May 16 23:40:32.043: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077968417s May 16 23:40:34.047: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 4.082529891s May 16 23:40:36.052: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 6.087183207s May 16 23:40:38.057: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 8.092295795s May 16 23:40:40.062: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 10.096862255s May 16 23:40:42.066: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 12.101269828s May 16 23:40:44.071: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 14.106146214s May 16 23:40:46.076: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 16.110636031s May 16 23:40:48.079: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 18.11449499s May 16 23:40:50.097: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 20.132281005s May 16 23:40:52.102: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 22.136806601s May 16 23:40:54.105: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Running", Reason="", readiness=true. Elapsed: 24.140331479s May 16 23:40:56.110: INFO: Pod "pod-subpath-test-projected-vwc9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.144698257s STEP: Saw pod success May 16 23:40:56.110: INFO: Pod "pod-subpath-test-projected-vwc9" satisfied condition "Succeeded or Failed" May 16 23:40:56.113: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-vwc9 container test-container-subpath-projected-vwc9: STEP: delete the pod May 16 23:40:56.167: INFO: Waiting for pod pod-subpath-test-projected-vwc9 to disappear May 16 23:40:56.172: INFO: Pod pod-subpath-test-projected-vwc9 no longer exists STEP: Deleting pod pod-subpath-test-projected-vwc9 May 16 23:40:56.172: INFO: Deleting pod "pod-subpath-test-projected-vwc9" in namespace "subpath-8046" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:40:56.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8046" for this suite. • [SLOW TEST:26.493 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":19,"skipped":334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:40:56.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1234 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 16 23:40:56.391: INFO: Found 0 stateful pods, waiting for 3 May 16 23:41:06.482: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 23:41:06.482: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 23:41:06.482: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 16 23:41:16.396: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 16 23:41:16.396: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 16 23:41:16.396: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 16 23:41:16.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1234 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 23:41:16.665: INFO: stderr: "I0516 23:41:16.539205 137 log.go:172] (0xc000abefd0) (0xc0006f1d60) Create 
stream\nI0516 23:41:16.539255 137 log.go:172] (0xc000abefd0) (0xc0006f1d60) Stream added, broadcasting: 1\nI0516 23:41:16.542769 137 log.go:172] (0xc000abefd0) Reply frame received for 1\nI0516 23:41:16.542809 137 log.go:172] (0xc000abefd0) (0xc0006e4f00) Create stream\nI0516 23:41:16.542821 137 log.go:172] (0xc000abefd0) (0xc0006e4f00) Stream added, broadcasting: 3\nI0516 23:41:16.543651 137 log.go:172] (0xc000abefd0) Reply frame received for 3\nI0516 23:41:16.543679 137 log.go:172] (0xc000abefd0) (0xc0006bc640) Create stream\nI0516 23:41:16.543687 137 log.go:172] (0xc000abefd0) (0xc0006bc640) Stream added, broadcasting: 5\nI0516 23:41:16.544403 137 log.go:172] (0xc000abefd0) Reply frame received for 5\nI0516 23:41:16.626661 137 log.go:172] (0xc000abefd0) Data frame received for 5\nI0516 23:41:16.626699 137 log.go:172] (0xc0006bc640) (5) Data frame handling\nI0516 23:41:16.626727 137 log.go:172] (0xc0006bc640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 23:41:16.658480 137 log.go:172] (0xc000abefd0) Data frame received for 3\nI0516 23:41:16.658517 137 log.go:172] (0xc0006e4f00) (3) Data frame handling\nI0516 23:41:16.658541 137 log.go:172] (0xc0006e4f00) (3) Data frame sent\nI0516 23:41:16.658658 137 log.go:172] (0xc000abefd0) Data frame received for 5\nI0516 23:41:16.658682 137 log.go:172] (0xc0006bc640) (5) Data frame handling\nI0516 23:41:16.659260 137 log.go:172] (0xc000abefd0) Data frame received for 3\nI0516 23:41:16.659275 137 log.go:172] (0xc0006e4f00) (3) Data frame handling\nI0516 23:41:16.660145 137 log.go:172] (0xc000abefd0) Data frame received for 1\nI0516 23:41:16.660171 137 log.go:172] (0xc0006f1d60) (1) Data frame handling\nI0516 23:41:16.660185 137 log.go:172] (0xc0006f1d60) (1) Data frame sent\nI0516 23:41:16.660194 137 log.go:172] (0xc000abefd0) (0xc0006f1d60) Stream removed, broadcasting: 1\nI0516 23:41:16.660209 137 log.go:172] (0xc000abefd0) Go away received\nI0516 23:41:16.660657 137 log.go:172] 
(0xc000abefd0) (0xc0006f1d60) Stream removed, broadcasting: 1\nI0516 23:41:16.660674 137 log.go:172] (0xc000abefd0) (0xc0006e4f00) Stream removed, broadcasting: 3\nI0516 23:41:16.660682 137 log.go:172] (0xc000abefd0) (0xc0006bc640) Stream removed, broadcasting: 5\n" May 16 23:41:16.665: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 23:41:16.665: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 16 23:41:26.730: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 16 23:41:36.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1234 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:41:37.031: INFO: stderr: "I0516 23:41:36.945635 157 log.go:172] (0xc000a0c000) (0xc0006a1a40) Create stream\nI0516 23:41:36.945691 157 log.go:172] (0xc000a0c000) (0xc0006a1a40) Stream added, broadcasting: 1\nI0516 23:41:36.947570 157 log.go:172] (0xc000a0c000) Reply frame received for 1\nI0516 23:41:36.947607 157 log.go:172] (0xc000a0c000) (0xc000674b40) Create stream\nI0516 23:41:36.947615 157 log.go:172] (0xc000a0c000) (0xc000674b40) Stream added, broadcasting: 3\nI0516 23:41:36.948719 157 log.go:172] (0xc000a0c000) Reply frame received for 3\nI0516 23:41:36.948762 157 log.go:172] (0xc000a0c000) (0xc000675ae0) Create stream\nI0516 23:41:36.948784 157 log.go:172] (0xc000a0c000) (0xc000675ae0) Stream added, broadcasting: 5\nI0516 23:41:36.949800 157 log.go:172] (0xc000a0c000) Reply frame received for 5\nI0516 23:41:37.023598 157 log.go:172] (0xc000a0c000) Data frame received for 3\nI0516 23:41:37.023629 157 log.go:172] (0xc000674b40) (3) 
Data frame handling\nI0516 23:41:37.023636 157 log.go:172] (0xc000674b40) (3) Data frame sent\nI0516 23:41:37.023641 157 log.go:172] (0xc000a0c000) Data frame received for 3\nI0516 23:41:37.023645 157 log.go:172] (0xc000674b40) (3) Data frame handling\nI0516 23:41:37.023681 157 log.go:172] (0xc000a0c000) Data frame received for 5\nI0516 23:41:37.023708 157 log.go:172] (0xc000675ae0) (5) Data frame handling\nI0516 23:41:37.023737 157 log.go:172] (0xc000675ae0) (5) Data frame sent\nI0516 23:41:37.023754 157 log.go:172] (0xc000a0c000) Data frame received for 5\nI0516 23:41:37.023765 157 log.go:172] (0xc000675ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 23:41:37.024986 157 log.go:172] (0xc000a0c000) Data frame received for 1\nI0516 23:41:37.025024 157 log.go:172] (0xc0006a1a40) (1) Data frame handling\nI0516 23:41:37.025055 157 log.go:172] (0xc0006a1a40) (1) Data frame sent\nI0516 23:41:37.025085 157 log.go:172] (0xc000a0c000) (0xc0006a1a40) Stream removed, broadcasting: 1\nI0516 23:41:37.025108 157 log.go:172] (0xc000a0c000) Go away received\nI0516 23:41:37.025869 157 log.go:172] (0xc000a0c000) (0xc0006a1a40) Stream removed, broadcasting: 1\nI0516 23:41:37.025904 157 log.go:172] (0xc000a0c000) (0xc000674b40) Stream removed, broadcasting: 3\nI0516 23:41:37.025928 157 log.go:172] (0xc000a0c000) (0xc000675ae0) Stream removed, broadcasting: 5\n" May 16 23:41:37.031: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 23:41:37.031: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 23:41:47.050: INFO: Waiting for StatefulSet statefulset-1234/ss2 to complete update May 16 23:41:47.051: INFO: Waiting for Pod statefulset-1234/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 16 23:41:47.051: INFO: Waiting for Pod statefulset-1234/ss2-1 to have revision ss2-84f9d6bf57 update 
revision ss2-65c7964b94 May 16 23:41:57.059: INFO: Waiting for StatefulSet statefulset-1234/ss2 to complete update May 16 23:41:57.059: INFO: Waiting for Pod statefulset-1234/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 16 23:42:07.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1234 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 23:42:07.342: INFO: stderr: "I0516 23:42:07.198658 179 log.go:172] (0xc0006e6bb0) (0xc0004e90e0) Create stream\nI0516 23:42:07.198747 179 log.go:172] (0xc0006e6bb0) (0xc0004e90e0) Stream added, broadcasting: 1\nI0516 23:42:07.201066 179 log.go:172] (0xc0006e6bb0) Reply frame received for 1\nI0516 23:42:07.201105 179 log.go:172] (0xc0006e6bb0) (0xc0002f6c80) Create stream\nI0516 23:42:07.201302 179 log.go:172] (0xc0006e6bb0) (0xc0002f6c80) Stream added, broadcasting: 3\nI0516 23:42:07.202121 179 log.go:172] (0xc0006e6bb0) Reply frame received for 3\nI0516 23:42:07.202145 179 log.go:172] (0xc0006e6bb0) (0xc000688460) Create stream\nI0516 23:42:07.202153 179 log.go:172] (0xc0006e6bb0) (0xc000688460) Stream added, broadcasting: 5\nI0516 23:42:07.203219 179 log.go:172] (0xc0006e6bb0) Reply frame received for 5\nI0516 23:42:07.300871 179 log.go:172] (0xc0006e6bb0) Data frame received for 5\nI0516 23:42:07.300914 179 log.go:172] (0xc000688460) (5) Data frame handling\nI0516 23:42:07.300939 179 log.go:172] (0xc000688460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 23:42:07.333871 179 log.go:172] (0xc0006e6bb0) Data frame received for 3\nI0516 23:42:07.333903 179 log.go:172] (0xc0002f6c80) (3) Data frame handling\nI0516 23:42:07.333920 179 log.go:172] (0xc0002f6c80) (3) Data frame sent\nI0516 23:42:07.333992 179 log.go:172] (0xc0006e6bb0) Data frame received for 5\nI0516 23:42:07.334008 179 log.go:172] 
(0xc000688460) (5) Data frame handling\nI0516 23:42:07.334824 179 log.go:172] (0xc0006e6bb0) Data frame received for 3\nI0516 23:42:07.334866 179 log.go:172] (0xc0002f6c80) (3) Data frame handling\nI0516 23:42:07.336077 179 log.go:172] (0xc0006e6bb0) Data frame received for 1\nI0516 23:42:07.336109 179 log.go:172] (0xc0004e90e0) (1) Data frame handling\nI0516 23:42:07.336152 179 log.go:172] (0xc0004e90e0) (1) Data frame sent\nI0516 23:42:07.336272 179 log.go:172] (0xc0006e6bb0) (0xc0004e90e0) Stream removed, broadcasting: 1\nI0516 23:42:07.336304 179 log.go:172] (0xc0006e6bb0) Go away received\nI0516 23:42:07.336737 179 log.go:172] (0xc0006e6bb0) (0xc0004e90e0) Stream removed, broadcasting: 1\nI0516 23:42:07.336762 179 log.go:172] (0xc0006e6bb0) (0xc0002f6c80) Stream removed, broadcasting: 3\nI0516 23:42:07.336776 179 log.go:172] (0xc0006e6bb0) (0xc000688460) Stream removed, broadcasting: 5\n" May 16 23:42:07.343: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 23:42:07.343: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 23:42:17.373: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 16 23:42:27.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1234 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:42:27.618: INFO: stderr: "I0516 23:42:27.548505 200 log.go:172] (0xc000ad11e0) (0xc000519900) Create stream\nI0516 23:42:27.548569 200 log.go:172] (0xc000ad11e0) (0xc000519900) Stream added, broadcasting: 1\nI0516 23:42:27.551302 200 log.go:172] (0xc000ad11e0) Reply frame received for 1\nI0516 23:42:27.551352 200 log.go:172] (0xc000ad11e0) (0xc000428500) Create stream\nI0516 23:42:27.551367 200 log.go:172] (0xc000ad11e0) (0xc000428500) Stream added, broadcasting: 
3\nI0516 23:42:27.552569 200 log.go:172] (0xc000ad11e0) Reply frame received for 3\nI0516 23:42:27.552604 200 log.go:172] (0xc000ad11e0) (0xc000428d20) Create stream\nI0516 23:42:27.552619 200 log.go:172] (0xc000ad11e0) (0xc000428d20) Stream added, broadcasting: 5\nI0516 23:42:27.553940 200 log.go:172] (0xc000ad11e0) Reply frame received for 5\nI0516 23:42:27.611615 200 log.go:172] (0xc000ad11e0) Data frame received for 5\nI0516 23:42:27.611675 200 log.go:172] (0xc000428d20) (5) Data frame handling\nI0516 23:42:27.611700 200 log.go:172] (0xc000428d20) (5) Data frame sent\nI0516 23:42:27.611721 200 log.go:172] (0xc000ad11e0) Data frame received for 5\nI0516 23:42:27.611736 200 log.go:172] (0xc000428d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 23:42:27.611770 200 log.go:172] (0xc000ad11e0) Data frame received for 3\nI0516 23:42:27.611818 200 log.go:172] (0xc000428500) (3) Data frame handling\nI0516 23:42:27.611847 200 log.go:172] (0xc000428500) (3) Data frame sent\nI0516 23:42:27.611862 200 log.go:172] (0xc000ad11e0) Data frame received for 3\nI0516 23:42:27.611870 200 log.go:172] (0xc000428500) (3) Data frame handling\nI0516 23:42:27.613081 200 log.go:172] (0xc000ad11e0) Data frame received for 1\nI0516 23:42:27.613098 200 log.go:172] (0xc000519900) (1) Data frame handling\nI0516 23:42:27.613264 200 log.go:172] (0xc000519900) (1) Data frame sent\nI0516 23:42:27.613289 200 log.go:172] (0xc000ad11e0) (0xc000519900) Stream removed, broadcasting: 1\nI0516 23:42:27.613305 200 log.go:172] (0xc000ad11e0) Go away received\nI0516 23:42:27.613751 200 log.go:172] (0xc000ad11e0) (0xc000519900) Stream removed, broadcasting: 1\nI0516 23:42:27.613772 200 log.go:172] (0xc000ad11e0) (0xc000428500) Stream removed, broadcasting: 3\nI0516 23:42:27.613785 200 log.go:172] (0xc000ad11e0) (0xc000428d20) Stream removed, broadcasting: 5\n" May 16 23:42:27.618: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 
23:42:27.618: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 23:42:47.638: INFO: Waiting for StatefulSet statefulset-1234/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 16 23:42:57.647: INFO: Deleting all statefulset in ns statefulset-1234 May 16 23:42:57.650: INFO: Scaling statefulset ss2 to 0 May 16 23:43:17.686: INFO: Waiting for statefulset status.replicas updated to 0 May 16 23:43:17.714: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:43:17.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1234" for this suite. • [SLOW TEST:141.456 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":20,"skipped":399,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:43:17.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 23:43:17.784: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 16 23:43:20.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7406 create -f -' May 16 23:43:24.178: INFO: stderr: "" May 16 23:43:24.178: INFO: stdout: "e2e-test-crd-publish-openapi-7065-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 16 23:43:24.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7406 delete e2e-test-crd-publish-openapi-7065-crds test-cr' May 16 23:43:24.312: INFO: stderr: "" May 16 23:43:24.312: INFO: stdout: "e2e-test-crd-publish-openapi-7065-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 16 23:43:24.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7406 apply -f -' May 16 23:43:24.597: INFO: stderr: "" May 16 23:43:24.597: INFO: stdout: "e2e-test-crd-publish-openapi-7065-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 16 23:43:24.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7406 delete 
e2e-test-crd-publish-openapi-7065-crds test-cr' May 16 23:43:24.740: INFO: stderr: "" May 16 23:43:24.740: INFO: stdout: "e2e-test-crd-publish-openapi-7065-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 16 23:43:24.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7065-crds' May 16 23:43:25.014: INFO: stderr: "" May 16 23:43:25.014: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7065-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:43:27.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7406" for this suite. • [SLOW TEST:10.255 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":21,"skipped":404,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client May 16 23:43:27.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 16 23:43:28.087: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-234" to be "Succeeded or Failed" May 16 23:43:28.098: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.89887ms May 16 23:43:30.153: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065300182s May 16 23:43:32.157: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069906679s May 16 23:43:34.161: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073980017s STEP: Saw pod success May 16 23:43:34.161: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 16 23:43:34.164: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 16 23:43:34.227: INFO: Waiting for pod pod-host-path-test to disappear May 16 23:43:34.236: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:43:34.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-234" for this suite. 
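The hostPath mode test above creates a pod that mounts a host directory and asserts the mount's file mode from inside the container. A minimal sketch of such a pod spec, assuming an illustrative host path and image — not the test's actual manifest:

```yaml
# Hypothetical pod mounting a hostPath volume; the e2e test's real
# manifest additionally runs containers that check the mount's mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test      # name matches the log; spec details are assumed
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1      # container name matches the log
    image: busybox              # assumed image
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/test-volume    # illustrative host path
      type: DirectoryOrCreate
```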
• [SLOW TEST:6.272 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":420,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:43:34.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 16 23:43:34.368: INFO: Waiting up to 5m0s for pod "pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a" in namespace "emptydir-4048" to be "Succeeded or Failed" May 16 23:43:34.399: INFO: Pod "pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.893669ms May 16 23:43:36.440: INFO: Pod "pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07150118s May 16 23:43:38.444: INFO: Pod "pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.076143534s May 16 23:43:40.448: INFO: Pod "pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080147312s STEP: Saw pod success May 16 23:43:40.449: INFO: Pod "pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a" satisfied condition "Succeeded or Failed" May 16 23:43:40.451: INFO: Trying to get logs from node latest-worker2 pod pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a container test-container: STEP: delete the pod May 16 23:43:40.525: INFO: Waiting for pod pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a to disappear May 16 23:43:40.548: INFO: Pod pod-1e67e6e0-e37a-4c08-abb7-cdaa3ff62a8a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:43:40.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4048" for this suite. • [SLOW TEST:6.307 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:43:40.570: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-9b06ec22-f5d4-476a-8763-5b5dcda0ab60 in namespace container-probe-712 May 16 23:43:44.714: INFO: Started pod liveness-9b06ec22-f5d4-476a-8763-5b5dcda0ab60 in namespace container-probe-712 STEP: checking the pod's current state and verifying that restartCount is present May 16 23:43:44.717: INFO: Initial restart count of pod liveness-9b06ec22-f5d4-476a-8763-5b5dcda0ab60 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:47:45.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-712" for this suite. • [SLOW TEST:245.413 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":24,"skipped":479,"failed":0} S ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:47:45.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 16 23:47:46.134: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 16 23:47:46.144: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 16 23:47:46.144: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 16 23:47:46.174: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} 
BinarySI}] May 16 23:47:46.174: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 16 23:47:46.408: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 16 23:47:46.408: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 16 23:47:53.808: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:47:53.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-5191" for this suite. 
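The defaults the log verifies (100m CPU / 200Mi memory / 200Gi ephemeral-storage requests; 500m / 500Mi / 500Gi limits) correspond to a LimitRange of roughly this shape. A sketch reconstructed from the logged quantities, not the test's actual object; the min/max constraints the test also exercises are omitted:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-defaults     # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:             # applied to containers that omit requests
      cpu: 100m                 # {{100 -3}} in the log
      memory: 200Mi             # 209715200 bytes
      ephemeral-storage: 200Gi  # 214748364800 bytes
    default:                    # applied to containers that omit limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
```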
• [SLOW TEST:7.876 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":25,"skipped":480,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:47:53.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 23:49:54.011: INFO: Deleting pod "var-expansion-a4b358aa-b83c-4d98-861a-9a424e642e54" in namespace "var-expansion-3041" May 16 23:49:54.016: INFO: Wait up to 5m0s for pod "var-expansion-a4b358aa-b83c-4d98-861a-9a424e642e54" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:49:56.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3041" for this suite. 
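The variable-expansion test above expects pod setup to fail when a volume subpath substitution contains a backtick. A hypothetical pod of the shape this test exercises — the field values and image are assumptions, not the test's actual manifest:

```yaml
# Hypothetical pod whose expanded subPathExpr contains a backtick;
# the kubelet rejects the substitution, which is what the test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-backtick  # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # assumed image
    command: ["sh", "-c", "true"]
    env:
    - name: POD_NAME
      value: "value-with-`backticks`"   # backticks make the subpath invalid
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)
  volumes:
  - name: workdir
    emptyDir: {}
```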
• [SLOW TEST:122.177 seconds]
[k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":26,"skipped":488,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:49:56.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 16 23:49:56.105: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 16 23:49:56.120: INFO: Waiting for terminating namespaces to be deleted... 
May 16 23:49:56.123: INFO: Logging pods the apiserver thinks is on node latest-worker before test
May 16 23:49:56.127: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 16 23:49:56.128: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 16 23:49:56.128: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 16 23:49:56.128: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 16 23:49:56.128: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 16 23:49:56.128: INFO: Container kindnet-cni ready: true, restart count 0
May 16 23:49:56.128: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 16 23:49:56.128: INFO: Container kube-proxy ready: true, restart count 0
May 16 23:49:56.128: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
May 16 23:49:56.133: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 16 23:49:56.133: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 16 23:49:56.133: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 16 23:49:56.133: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 16 23:49:56.133: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 16 23:49:56.133: INFO: Container kindnet-cni ready: true, restart count 0
May 16 23:49:56.133: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 16 23:49:56.133: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160fa72ae4d9e7ff], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160fa72ae653da4f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:49:57.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2703" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":27,"skipped":509,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:49:57.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-1864bd68-d10d-4678-bc68-1ef860f2d8bc
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:49:57.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6237" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":28,"skipped":512,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:49:57.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-546f5cf1-460c-428c-9cdb-a413719e5c1f
STEP: Creating a pod to test consume configMaps
May 16 23:49:57.343: INFO: Waiting up to 5m0s for pod "pod-configmaps-287f3b77-492f-41ac-8539-c18d1ddf57f1" in namespace "configmap-591" to be "Succeeded or Failed"
May 16 23:49:57.388: INFO: Pod "pod-configmaps-287f3b77-492f-41ac-8539-c18d1ddf57f1": Phase="Pending", Reason="", readiness=false. Elapsed: 45.006232ms
May 16 23:49:59.392: INFO: Pod "pod-configmaps-287f3b77-492f-41ac-8539-c18d1ddf57f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048922542s
May 16 23:50:01.395: INFO: Pod "pod-configmaps-287f3b77-492f-41ac-8539-c18d1ddf57f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052063774s
STEP: Saw pod success
May 16 23:50:01.395: INFO: Pod "pod-configmaps-287f3b77-492f-41ac-8539-c18d1ddf57f1" satisfied condition "Succeeded or Failed"
May 16 23:50:01.397: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-287f3b77-492f-41ac-8539-c18d1ddf57f1 container configmap-volume-test:
STEP: delete the pod
May 16 23:50:01.803: INFO: Waiting for pod pod-configmaps-287f3b77-492f-41ac-8539-c18d1ddf57f1 to disappear
May 16 23:50:01.806: INFO: Pod pod-configmaps-287f3b77-492f-41ac-8539-c18d1ddf57f1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 16 23:50:01.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-591" for this suite. 
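"Consumable with mappings" means the ConfigMap volume uses an `items` list to project a key under a custom file path instead of its key name. The pair of manifests below is an illustrative sketch of that pattern; the ConfigMap data, file path, and image are assumptions (only the ConfigMap/pod name prefixes and the container name `configmap-volume-test` come from the log).

```yaml
# Illustrative ConfigMap volume with a key-to-path mapping.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # log shows this prefix plus a UID suffix
data:
  data-2: value-2                   # hypothetical key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2        # mapping: key projected under a custom path
```

The test then reads the container's logs (the `cat` output) to confirm the mapped file holds the expected value, which is why the log fetches logs from `configmap-volume-test` before deleting the pod.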
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":516,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:50:01.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-9609
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating stateful set ss in namespace statefulset-9609
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9609
May 16 23:50:01.951: INFO: Found 0 stateful pods, waiting for 1
May 16 23:50:11.955: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 16 23:50:11.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 23:50:12.252: INFO: stderr: "I0516 23:50:12.122641 331 log.go:172] (0xc000abc210) (0xc00062a640) Create stream\nI0516 23:50:12.122697 331 log.go:172] (0xc000abc210) (0xc00062a640) Stream added, broadcasting: 1\nI0516 23:50:12.124772 331 log.go:172] (0xc000abc210) Reply frame received for 1\nI0516 23:50:12.124805 331 log.go:172] (0xc000abc210) (0xc000562320) Create stream\nI0516 23:50:12.124816 331 log.go:172] (0xc000abc210) (0xc000562320) Stream added, broadcasting: 3\nI0516 23:50:12.125858 331 log.go:172] (0xc000abc210) Reply frame received for 3\nI0516 23:50:12.125885 331 log.go:172] (0xc000abc210) (0xc000638dc0) Create stream\nI0516 23:50:12.125900 331 log.go:172] (0xc000abc210) (0xc000638dc0) Stream added, broadcasting: 5\nI0516 23:50:12.126761 331 log.go:172] (0xc000abc210) Reply frame received for 5\nI0516 23:50:12.201321 331 log.go:172] (0xc000abc210) Data frame received for 5\nI0516 23:50:12.201343 331 log.go:172] (0xc000638dc0) (5) Data frame handling\nI0516 23:50:12.201355 331 log.go:172] (0xc000638dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 23:50:12.244534 331 log.go:172] (0xc000abc210) Data frame received for 3\nI0516 23:50:12.244567 331 log.go:172] (0xc000562320) (3) Data frame handling\nI0516 23:50:12.244582 331 log.go:172] (0xc000562320) (3) Data frame sent\nI0516 23:50:12.244588 331 log.go:172] (0xc000abc210) Data frame received for 3\nI0516 23:50:12.244596 331 log.go:172] (0xc000562320) (3) Data frame handling\nI0516 23:50:12.244870 331 log.go:172] (0xc000abc210) Data frame received for 5\nI0516 23:50:12.244883 331 log.go:172] (0xc000638dc0) (5) Data frame handling\nI0516 23:50:12.246889 331 log.go:172] (0xc000abc210) Data frame received for 1\nI0516 23:50:12.246908 331 log.go:172] (0xc00062a640) (1) Data frame handling\nI0516 23:50:12.246921 331 log.go:172] (0xc00062a640) (1) Data frame sent\nI0516 23:50:12.246937 331 log.go:172] (0xc000abc210) 
(0xc00062a640) Stream removed, broadcasting: 1\nI0516 23:50:12.246977 331 log.go:172] (0xc000abc210) Go away received\nI0516 23:50:12.247366 331 log.go:172] (0xc000abc210) (0xc00062a640) Stream removed, broadcasting: 1\nI0516 23:50:12.247389 331 log.go:172] (0xc000abc210) (0xc000562320) Stream removed, broadcasting: 3\nI0516 23:50:12.247400 331 log.go:172] (0xc000abc210) (0xc000638dc0) Stream removed, broadcasting: 5\n" May 16 23:50:12.252: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 23:50:12.252: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 23:50:12.256: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 16 23:50:22.261: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 23:50:22.261: INFO: Waiting for statefulset status.replicas updated to 0 May 16 23:50:22.282: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:50:22.282: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:01 +0000 UTC }] May 16 23:50:22.282: INFO: May 16 23:50:22.282: INFO: StatefulSet ss has not reached scale 3, at 1 May 16 23:50:23.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989985604s May 16 23:50:24.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986054661s May 16 23:50:25.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.960110317s May 16 23:50:26.406: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 5.871828715s May 16 23:50:27.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.866083803s May 16 23:50:28.416: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.862137852s May 16 23:50:29.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.85588228s May 16 23:50:30.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.849394196s May 16 23:50:31.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 843.852988ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9609 May 16 23:50:32.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:50:32.676: INFO: stderr: "I0516 23:50:32.587393 351 log.go:172] (0xc00003ab00) (0xc00061cfa0) Create stream\nI0516 23:50:32.587462 351 log.go:172] (0xc00003ab00) (0xc00061cfa0) Stream added, broadcasting: 1\nI0516 23:50:32.590818 351 log.go:172] (0xc00003ab00) Reply frame received for 1\nI0516 23:50:32.590859 351 log.go:172] (0xc00003ab00) (0xc0005281e0) Create stream\nI0516 23:50:32.590871 351 log.go:172] (0xc00003ab00) (0xc0005281e0) Stream added, broadcasting: 3\nI0516 23:50:32.591710 351 log.go:172] (0xc00003ab00) Reply frame received for 3\nI0516 23:50:32.591745 351 log.go:172] (0xc00003ab00) (0xc000432d20) Create stream\nI0516 23:50:32.591756 351 log.go:172] (0xc00003ab00) (0xc000432d20) Stream added, broadcasting: 5\nI0516 23:50:32.592823 351 log.go:172] (0xc00003ab00) Reply frame received for 5\nI0516 23:50:32.670113 351 log.go:172] (0xc00003ab00) Data frame received for 5\nI0516 23:50:32.670152 351 log.go:172] (0xc000432d20) (5) Data frame handling\nI0516 23:50:32.670171 351 log.go:172] (0xc000432d20) (5) Data frame sent\nI0516 23:50:32.670184 351 log.go:172] 
(0xc00003ab00) Data frame received for 5\nI0516 23:50:32.670196 351 log.go:172] (0xc000432d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0516 23:50:32.670229 351 log.go:172] (0xc00003ab00) Data frame received for 3\nI0516 23:50:32.670264 351 log.go:172] (0xc0005281e0) (3) Data frame handling\nI0516 23:50:32.670284 351 log.go:172] (0xc0005281e0) (3) Data frame sent\nI0516 23:50:32.670295 351 log.go:172] (0xc00003ab00) Data frame received for 3\nI0516 23:50:32.670307 351 log.go:172] (0xc0005281e0) (3) Data frame handling\nI0516 23:50:32.671516 351 log.go:172] (0xc00003ab00) Data frame received for 1\nI0516 23:50:32.671544 351 log.go:172] (0xc00061cfa0) (1) Data frame handling\nI0516 23:50:32.671575 351 log.go:172] (0xc00061cfa0) (1) Data frame sent\nI0516 23:50:32.671598 351 log.go:172] (0xc00003ab00) (0xc00061cfa0) Stream removed, broadcasting: 1\nI0516 23:50:32.671700 351 log.go:172] (0xc00003ab00) Go away received\nI0516 23:50:32.672032 351 log.go:172] (0xc00003ab00) (0xc00061cfa0) Stream removed, broadcasting: 1\nI0516 23:50:32.672059 351 log.go:172] (0xc00003ab00) (0xc0005281e0) Stream removed, broadcasting: 3\nI0516 23:50:32.672075 351 log.go:172] (0xc00003ab00) (0xc000432d20) Stream removed, broadcasting: 5\n" May 16 23:50:32.676: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 23:50:32.676: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 23:50:32.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:50:32.882: INFO: stderr: "I0516 23:50:32.819752 369 log.go:172] (0xc000b5dd90) (0xc00072a500) Create stream\nI0516 23:50:32.819824 369 log.go:172] (0xc000b5dd90) (0xc00072a500) Stream added, broadcasting: 1\nI0516 
23:50:32.822666 369 log.go:172] (0xc000b5dd90) Reply frame received for 1\nI0516 23:50:32.822716 369 log.go:172] (0xc000b5dd90) (0xc000692e60) Create stream\nI0516 23:50:32.822727 369 log.go:172] (0xc000b5dd90) (0xc000692e60) Stream added, broadcasting: 3\nI0516 23:50:32.823723 369 log.go:172] (0xc000b5dd90) Reply frame received for 3\nI0516 23:50:32.823774 369 log.go:172] (0xc000b5dd90) (0xc000736dc0) Create stream\nI0516 23:50:32.823807 369 log.go:172] (0xc000b5dd90) (0xc000736dc0) Stream added, broadcasting: 5\nI0516 23:50:32.824807 369 log.go:172] (0xc000b5dd90) Reply frame received for 5\nI0516 23:50:32.874569 369 log.go:172] (0xc000b5dd90) Data frame received for 3\nI0516 23:50:32.874611 369 log.go:172] (0xc000692e60) (3) Data frame handling\nI0516 23:50:32.874636 369 log.go:172] (0xc000692e60) (3) Data frame sent\nI0516 23:50:32.874750 369 log.go:172] (0xc000b5dd90) Data frame received for 5\nI0516 23:50:32.874772 369 log.go:172] (0xc000736dc0) (5) Data frame handling\nI0516 23:50:32.874793 369 log.go:172] (0xc000736dc0) (5) Data frame sent\nI0516 23:50:32.874814 369 log.go:172] (0xc000b5dd90) Data frame received for 5\nI0516 23:50:32.874833 369 log.go:172] (0xc000736dc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0516 23:50:32.874869 369 log.go:172] (0xc000736dc0) (5) Data frame sent\nI0516 23:50:32.874886 369 log.go:172] (0xc000b5dd90) Data frame received for 5\nI0516 23:50:32.874899 369 log.go:172] (0xc000736dc0) (5) Data frame handling\nI0516 23:50:32.874921 369 log.go:172] (0xc000b5dd90) Data frame received for 3\nI0516 23:50:32.874931 369 log.go:172] (0xc000692e60) (3) Data frame handling\nI0516 23:50:32.876437 369 log.go:172] (0xc000b5dd90) Data frame received for 1\nI0516 23:50:32.876462 369 log.go:172] (0xc00072a500) (1) Data frame handling\nI0516 23:50:32.876482 369 log.go:172] (0xc00072a500) (1) Data frame sent\nI0516 23:50:32.876502 369 
log.go:172] (0xc000b5dd90) (0xc00072a500) Stream removed, broadcasting: 1\nI0516 23:50:32.876518 369 log.go:172] (0xc000b5dd90) Go away received\nI0516 23:50:32.877383 369 log.go:172] (0xc000b5dd90) (0xc00072a500) Stream removed, broadcasting: 1\nI0516 23:50:32.877407 369 log.go:172] (0xc000b5dd90) (0xc000692e60) Stream removed, broadcasting: 3\nI0516 23:50:32.877420 369 log.go:172] (0xc000b5dd90) (0xc000736dc0) Stream removed, broadcasting: 5\n" May 16 23:50:32.882: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 23:50:32.882: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 23:50:32.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:50:33.076: INFO: stderr: "I0516 23:50:33.008923 389 log.go:172] (0xc00003b6b0) (0xc000acc640) Create stream\nI0516 23:50:33.008985 389 log.go:172] (0xc00003b6b0) (0xc000acc640) Stream added, broadcasting: 1\nI0516 23:50:33.011787 389 log.go:172] (0xc00003b6b0) Reply frame received for 1\nI0516 23:50:33.011916 389 log.go:172] (0xc00003b6b0) (0xc000604780) Create stream\nI0516 23:50:33.011966 389 log.go:172] (0xc00003b6b0) (0xc000604780) Stream added, broadcasting: 3\nI0516 23:50:33.013964 389 log.go:172] (0xc00003b6b0) Reply frame received for 3\nI0516 23:50:33.014028 389 log.go:172] (0xc00003b6b0) (0xc0006fc5a0) Create stream\nI0516 23:50:33.014047 389 log.go:172] (0xc00003b6b0) (0xc0006fc5a0) Stream added, broadcasting: 5\nI0516 23:50:33.015303 389 log.go:172] (0xc00003b6b0) Reply frame received for 5\nI0516 23:50:33.069585 389 log.go:172] (0xc00003b6b0) Data frame received for 5\nI0516 23:50:33.069800 389 log.go:172] (0xc0006fc5a0) (5) Data frame handling\nI0516 23:50:33.069864 389 log.go:172] (0xc0006fc5a0) (5) Data 
frame sent\nI0516 23:50:33.069882 389 log.go:172] (0xc00003b6b0) Data frame received for 5\nI0516 23:50:33.069890 389 log.go:172] (0xc0006fc5a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0516 23:50:33.069911 389 log.go:172] (0xc00003b6b0) Data frame received for 3\nI0516 23:50:33.069918 389 log.go:172] (0xc000604780) (3) Data frame handling\nI0516 23:50:33.069927 389 log.go:172] (0xc000604780) (3) Data frame sent\nI0516 23:50:33.069934 389 log.go:172] (0xc00003b6b0) Data frame received for 3\nI0516 23:50:33.069940 389 log.go:172] (0xc000604780) (3) Data frame handling\nI0516 23:50:33.071527 389 log.go:172] (0xc00003b6b0) Data frame received for 1\nI0516 23:50:33.071549 389 log.go:172] (0xc000acc640) (1) Data frame handling\nI0516 23:50:33.071707 389 log.go:172] (0xc000acc640) (1) Data frame sent\nI0516 23:50:33.071730 389 log.go:172] (0xc00003b6b0) (0xc000acc640) Stream removed, broadcasting: 1\nI0516 23:50:33.071766 389 log.go:172] (0xc00003b6b0) Go away received\nI0516 23:50:33.071996 389 log.go:172] (0xc00003b6b0) (0xc000acc640) Stream removed, broadcasting: 1\nI0516 23:50:33.072008 389 log.go:172] (0xc00003b6b0) (0xc000604780) Stream removed, broadcasting: 3\nI0516 23:50:33.072013 389 log.go:172] (0xc00003b6b0) (0xc0006fc5a0) Stream removed, broadcasting: 5\n" May 16 23:50:33.076: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 16 23:50:33.076: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 16 23:50:33.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 16 23:50:43.085: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 16 23:50:43.085: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 16 
23:50:43.085: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 16 23:50:43.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 23:50:43.325: INFO: stderr: "I0516 23:50:43.224808 409 log.go:172] (0xc00003a840) (0xc00052cdc0) Create stream\nI0516 23:50:43.224869 409 log.go:172] (0xc00003a840) (0xc00052cdc0) Stream added, broadcasting: 1\nI0516 23:50:43.227299 409 log.go:172] (0xc00003a840) Reply frame received for 1\nI0516 23:50:43.227329 409 log.go:172] (0xc00003a840) (0xc00014fae0) Create stream\nI0516 23:50:43.227338 409 log.go:172] (0xc00003a840) (0xc00014fae0) Stream added, broadcasting: 3\nI0516 23:50:43.228305 409 log.go:172] (0xc00003a840) Reply frame received for 3\nI0516 23:50:43.228375 409 log.go:172] (0xc00003a840) (0xc0006dcf00) Create stream\nI0516 23:50:43.228406 409 log.go:172] (0xc00003a840) (0xc0006dcf00) Stream added, broadcasting: 5\nI0516 23:50:43.229602 409 log.go:172] (0xc00003a840) Reply frame received for 5\nI0516 23:50:43.317871 409 log.go:172] (0xc00003a840) Data frame received for 3\nI0516 23:50:43.317945 409 log.go:172] (0xc00014fae0) (3) Data frame handling\nI0516 23:50:43.317971 409 log.go:172] (0xc00014fae0) (3) Data frame sent\nI0516 23:50:43.317992 409 log.go:172] (0xc00003a840) Data frame received for 3\nI0516 23:50:43.318011 409 log.go:172] (0xc00014fae0) (3) Data frame handling\nI0516 23:50:43.318062 409 log.go:172] (0xc00003a840) Data frame received for 5\nI0516 23:50:43.318083 409 log.go:172] (0xc0006dcf00) (5) Data frame handling\nI0516 23:50:43.318118 409 log.go:172] (0xc0006dcf00) (5) Data frame sent\nI0516 23:50:43.318147 409 log.go:172] (0xc00003a840) Data frame received for 5\nI0516 23:50:43.318159 409 log.go:172] (0xc0006dcf00) (5) Data frame 
handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 23:50:43.319511 409 log.go:172] (0xc00003a840) Data frame received for 1\nI0516 23:50:43.319539 409 log.go:172] (0xc00052cdc0) (1) Data frame handling\nI0516 23:50:43.319548 409 log.go:172] (0xc00052cdc0) (1) Data frame sent\nI0516 23:50:43.319559 409 log.go:172] (0xc00003a840) (0xc00052cdc0) Stream removed, broadcasting: 1\nI0516 23:50:43.319573 409 log.go:172] (0xc00003a840) Go away received\nI0516 23:50:43.319980 409 log.go:172] (0xc00003a840) (0xc00052cdc0) Stream removed, broadcasting: 1\nI0516 23:50:43.320004 409 log.go:172] (0xc00003a840) (0xc00014fae0) Stream removed, broadcasting: 3\nI0516 23:50:43.320016 409 log.go:172] (0xc00003a840) (0xc0006dcf00) Stream removed, broadcasting: 5\n" May 16 23:50:43.325: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 23:50:43.325: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 23:50:43.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 23:50:43.598: INFO: stderr: "I0516 23:50:43.488959 430 log.go:172] (0xc000893290) (0xc000b0e3c0) Create stream\nI0516 23:50:43.489016 430 log.go:172] (0xc000893290) (0xc000b0e3c0) Stream added, broadcasting: 1\nI0516 23:50:43.501388 430 log.go:172] (0xc000893290) Reply frame received for 1\nI0516 23:50:43.501447 430 log.go:172] (0xc000893290) (0xc000572280) Create stream\nI0516 23:50:43.501468 430 log.go:172] (0xc000893290) (0xc000572280) Stream added, broadcasting: 3\nI0516 23:50:43.503964 430 log.go:172] (0xc000893290) Reply frame received for 3\nI0516 23:50:43.503993 430 log.go:172] (0xc000893290) (0xc000554dc0) Create stream\nI0516 23:50:43.504009 430 log.go:172] (0xc000893290) (0xc000554dc0) Stream 
added, broadcasting: 5\nI0516 23:50:43.504739 430 log.go:172] (0xc000893290) Reply frame received for 5\nI0516 23:50:43.555794 430 log.go:172] (0xc000893290) Data frame received for 5\nI0516 23:50:43.555825 430 log.go:172] (0xc000554dc0) (5) Data frame handling\nI0516 23:50:43.555842 430 log.go:172] (0xc000554dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 23:50:43.590671 430 log.go:172] (0xc000893290) Data frame received for 3\nI0516 23:50:43.590705 430 log.go:172] (0xc000572280) (3) Data frame handling\nI0516 23:50:43.590728 430 log.go:172] (0xc000572280) (3) Data frame sent\nI0516 23:50:43.590748 430 log.go:172] (0xc000893290) Data frame received for 3\nI0516 23:50:43.590780 430 log.go:172] (0xc000572280) (3) Data frame handling\nI0516 23:50:43.590881 430 log.go:172] (0xc000893290) Data frame received for 5\nI0516 23:50:43.590910 430 log.go:172] (0xc000554dc0) (5) Data frame handling\nI0516 23:50:43.592725 430 log.go:172] (0xc000893290) Data frame received for 1\nI0516 23:50:43.592751 430 log.go:172] (0xc000b0e3c0) (1) Data frame handling\nI0516 23:50:43.592767 430 log.go:172] (0xc000b0e3c0) (1) Data frame sent\nI0516 23:50:43.592783 430 log.go:172] (0xc000893290) (0xc000b0e3c0) Stream removed, broadcasting: 1\nI0516 23:50:43.592816 430 log.go:172] (0xc000893290) Go away received\nI0516 23:50:43.593350 430 log.go:172] (0xc000893290) (0xc000b0e3c0) Stream removed, broadcasting: 1\nI0516 23:50:43.593376 430 log.go:172] (0xc000893290) (0xc000572280) Stream removed, broadcasting: 3\nI0516 23:50:43.593394 430 log.go:172] (0xc000893290) (0xc000554dc0) Stream removed, broadcasting: 5\n" May 16 23:50:43.598: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 23:50:43.598: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 23:50:43.598: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 16 23:50:43.891: INFO: stderr: "I0516 23:50:43.781451 450 log.go:172] (0xc000a474a0) (0xc000704e60) Create stream\nI0516 23:50:43.781555 450 log.go:172] (0xc000a474a0) (0xc000704e60) Stream added, broadcasting: 1\nI0516 23:50:43.784586 450 log.go:172] (0xc000a474a0) Reply frame received for 1\nI0516 23:50:43.784627 450 log.go:172] (0xc000a474a0) (0xc00062bc20) Create stream\nI0516 23:50:43.784636 450 log.go:172] (0xc000a474a0) (0xc00062bc20) Stream added, broadcasting: 3\nI0516 23:50:43.786089 450 log.go:172] (0xc000a474a0) Reply frame received for 3\nI0516 23:50:43.786111 450 log.go:172] (0xc000a474a0) (0xc000ae6280) Create stream\nI0516 23:50:43.786118 450 log.go:172] (0xc000a474a0) (0xc000ae6280) Stream added, broadcasting: 5\nI0516 23:50:43.787204 450 log.go:172] (0xc000a474a0) Reply frame received for 5\nI0516 23:50:43.849035 450 log.go:172] (0xc000a474a0) Data frame received for 5\nI0516 23:50:43.849060 450 log.go:172] (0xc000ae6280) (5) Data frame handling\nI0516 23:50:43.849076 450 log.go:172] (0xc000ae6280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0516 23:50:43.883161 450 log.go:172] (0xc000a474a0) Data frame received for 5\nI0516 23:50:43.883209 450 log.go:172] (0xc000ae6280) (5) Data frame handling\nI0516 23:50:43.883239 450 log.go:172] (0xc000a474a0) Data frame received for 3\nI0516 23:50:43.883248 450 log.go:172] (0xc00062bc20) (3) Data frame handling\nI0516 23:50:43.883260 450 log.go:172] (0xc00062bc20) (3) Data frame sent\nI0516 23:50:43.883271 450 log.go:172] (0xc000a474a0) Data frame received for 3\nI0516 23:50:43.883280 450 log.go:172] (0xc00062bc20) (3) Data frame handling\nI0516 23:50:43.885301 450 log.go:172] (0xc000a474a0) Data frame received for 1\nI0516 23:50:43.885337 450 log.go:172] (0xc000704e60) (1) Data frame 
handling\nI0516 23:50:43.885356 450 log.go:172] (0xc000704e60) (1) Data frame sent\nI0516 23:50:43.885382 450 log.go:172] (0xc000a474a0) (0xc000704e60) Stream removed, broadcasting: 1\nI0516 23:50:43.885558 450 log.go:172] (0xc000a474a0) Go away received\nI0516 23:50:43.885830 450 log.go:172] (0xc000a474a0) (0xc000704e60) Stream removed, broadcasting: 1\nI0516 23:50:43.885854 450 log.go:172] (0xc000a474a0) (0xc00062bc20) Stream removed, broadcasting: 3\nI0516 23:50:43.885865 450 log.go:172] (0xc000a474a0) (0xc000ae6280) Stream removed, broadcasting: 5\n" May 16 23:50:43.891: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 16 23:50:43.891: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 16 23:50:43.891: INFO: Waiting for statefulset status.replicas updated to 0 May 16 23:50:43.895: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 16 23:50:53.903: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 16 23:50:53.903: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 16 23:50:53.903: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 16 23:50:53.916: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:50:53.916: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:01 +0000 UTC }] May 16 23:50:53.916: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:53.916: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:53.916: INFO: May 16 23:50:53.916: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 23:50:54.920: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:50:54.920: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:01 +0000 UTC }] May 16 23:50:54.920: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:54.920: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:54.921: INFO: May 16 23:50:54.921: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 23:50:55.925: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:50:55.925: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:01 +0000 UTC }] May 16 23:50:55.925: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:55.925: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:44 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:55.925: INFO: May 16 23:50:55.925: INFO: StatefulSet ss has not reached scale 0, at 3 May 16 23:50:56.930: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:50:56.930: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:01 +0000 UTC }] May 16 23:50:56.930: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:56.930: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:56.931: INFO: May 16 23:50:56.931: INFO: 
StatefulSet ss has not reached scale 0, at 3 May 16 23:50:57.935: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:50:57.935: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:57.935: INFO: May 16 23:50:57.935: INFO: StatefulSet ss has not reached scale 0, at 1 May 16 23:50:58.940: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:50:58.941: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:58.941: INFO: May 16 23:50:58.941: INFO: StatefulSet ss has not reached scale 0, at 1 May 16 23:50:59.946: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:50:59.946: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:50:59.946: INFO: May 16 23:50:59.946: INFO: 
StatefulSet ss has not reached scale 0, at 1 May 16 23:51:00.950: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:51:00.950: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:51:00.950: INFO: May 16 23:51:00.950: INFO: StatefulSet ss has not reached scale 0, at 1 May 16 23:51:01.983: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:51:01.983: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:51:01.983: INFO: May 16 23:51:01.983: INFO: StatefulSet ss has not reached scale 0, at 1 May 16 23:51:02.987: INFO: POD NODE PHASE GRACE CONDITIONS May 16 23:51:02.987: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-16 23:50:22 +0000 UTC }] May 16 23:51:02.987: INFO: May 16 23:51:02.987: INFO: 
StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-9609 May 16 23:51:03.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:51:04.130: INFO: rc: 1 May 16 23:51:04.131: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 16 23:51:14.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:51:14.233: INFO: rc: 1 May 16 23:51:14.233: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:51:24.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:51:24.364: INFO: rc: 1 May 16 23:51:24.365: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c
mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:51:34.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:51:34.460: INFO: rc: 1 May 16 23:51:34.460: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:51:44.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:51:44.556: INFO: rc: 1 May 16 23:51:44.556: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:51:54.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:51:54.657: INFO: rc: 1 May 16 23:51:54.657: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:52:04.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:52:04.758: INFO: rc: 1 May 16 23:52:04.758: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:52:14.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:52:14.869: INFO: rc: 1 May 16 23:52:14.869: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:52:24.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:52:24.962: INFO: rc: 1 May 16 23:52:24.962: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:52:34.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:52:35.069: INFO: rc: 1 May 16 23:52:35.070: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:52:45.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:52:45.179: INFO: rc: 1 May 16 23:52:45.179: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:52:55.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:52:55.282: INFO: rc: 1 May 16 23:52:55.282: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: 
stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:53:05.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:53:05.400: INFO: rc: 1 May 16 23:53:05.400: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:53:15.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:53:15.507: INFO: rc: 1 May 16 23:53:15.507: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:53:25.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:53:28.562: INFO: rc: 1 May 16 23:53:28.562: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server 
(NotFound): pods "ss-1" not found error: exit status 1 May 16 23:53:38.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:53:38.667: INFO: rc: 1 May 16 23:53:38.667: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:53:48.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:53:48.774: INFO: rc: 1 May 16 23:53:48.774: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:53:58.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:53:58.879: INFO: rc: 1 May 16 23:53:58.879: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not 
found error: exit status 1 May 16 23:54:08.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:54:09.012: INFO: rc: 1 May 16 23:54:09.012: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:54:19.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:54:19.121: INFO: rc: 1 May 16 23:54:19.121: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:54:29.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:54:29.226: INFO: rc: 1 May 16 23:54:29.226: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 
May 16 23:54:39.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:54:39.342: INFO: rc: 1 May 16 23:54:39.342: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:54:49.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:54:49.452: INFO: rc: 1 May 16 23:54:49.452: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:54:59.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:54:59.557: INFO: rc: 1 May 16 23:54:59.558: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:55:09.558: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:55:09.654: INFO: rc: 1 May 16 23:55:09.654: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:55:19.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:55:19.768: INFO: rc: 1 May 16 23:55:19.768: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:55:29.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:55:29.873: INFO: rc: 1 May 16 23:55:29.873: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:55:39.874: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:55:39.980: INFO: rc: 1 May 16 23:55:39.980: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:55:49.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:55:50.086: INFO: rc: 1 May 16 23:55:50.086: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:56:00.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:56:00.180: INFO: rc: 1 May 16 23:56:00.180: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 16 23:56:10.180: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9609 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 16 23:56:10.293: INFO: rc: 1 May 16 23:56:10.293: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: May 16 23:56:10.293: INFO: Scaling statefulset ss to 0 May 16 23:56:10.303: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 16 23:56:10.305: INFO: Deleting all statefulset in ns statefulset-9609 May 16 23:56:10.308: INFO: Scaling statefulset ss to 0 May 16 23:56:10.316: INFO: Waiting for statefulset status.replicas updated to 0 May 16 23:56:10.318: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:56:10.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9609" for this suite. 
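The long stretch above repeats a single pattern: the framework runs a `kubectl exec` via RunHostCmd, and on a nonzero exit code logs "Waiting 10s to retry", sleeps, and reruns the command until it succeeds or the overall test timeout expires. A minimal shell sketch of that retry loop follows; `retry_cmd` is a hypothetical helper for illustration, not the framework's actual implementation, and the kubectl invocation in the comment reuses names from this log.

```shell
# Sketch of the RunHostCmd retry behavior seen in the log above.
# retry_cmd is illustrative, not part of kubectl or the e2e framework.
retry_cmd() {
  # Usage: retry_cmd MAX_TRIES DELAY CMD [ARGS...]
  # Re-runs CMD until it exits 0, sleeping DELAY seconds between attempts.
  local max_tries=$1 delay=$2
  shift 2
  local i
  for i in $(seq 1 "$max_tries"); do
    if "$@"; then
      return 0
    fi
    echo "Waiting ${delay}s to retry failed command (attempt $i/$max_tries)" >&2
    sleep "$delay"
  done
  return 1
}

# Example mirroring the log (assumes kubectl and the cluster shown above):
# retry_cmd 30 10 kubectl --kubeconfig=/root/.kube/config \
#   exec --namespace=statefulset-9609 ss-1 -- \
#   /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
```

Note that the `|| true` in the log applies inside the pod's shell, so the retries keep failing as long as the exec itself cannot reach a container: both "unable to upgrade connection: container not found" and "pods \"ss-1\" not found" yield rc: 1 before the pod's shell ever runs.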
• [SLOW TEST:368.521 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":30,"skipped":521,"failed":0}
SSSSSS
------------------------------
[sig-network] Services
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 16 23:56:10.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-6092
May 16 23:56:14.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6092 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1
http://localhost:10249/proxyMode' May 16 23:56:14.690: INFO: stderr: "I0516 23:56:14.560156 1092 log.go:172] (0xc000acb4a0) (0xc000aae5a0) Create stream\nI0516 23:56:14.560216 1092 log.go:172] (0xc000acb4a0) (0xc000aae5a0) Stream added, broadcasting: 1\nI0516 23:56:14.566630 1092 log.go:172] (0xc000acb4a0) Reply frame received for 1\nI0516 23:56:14.566679 1092 log.go:172] (0xc000acb4a0) (0xc000838000) Create stream\nI0516 23:56:14.566692 1092 log.go:172] (0xc000acb4a0) (0xc000838000) Stream added, broadcasting: 3\nI0516 23:56:14.567604 1092 log.go:172] (0xc000acb4a0) Reply frame received for 3\nI0516 23:56:14.567653 1092 log.go:172] (0xc000acb4a0) (0xc00053ae60) Create stream\nI0516 23:56:14.567671 1092 log.go:172] (0xc000acb4a0) (0xc00053ae60) Stream added, broadcasting: 5\nI0516 23:56:14.568863 1092 log.go:172] (0xc000acb4a0) Reply frame received for 5\nI0516 23:56:14.659879 1092 log.go:172] (0xc000acb4a0) Data frame received for 5\nI0516 23:56:14.659900 1092 log.go:172] (0xc00053ae60) (5) Data frame handling\nI0516 23:56:14.659916 1092 log.go:172] (0xc00053ae60) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0516 23:56:14.680959 1092 log.go:172] (0xc000acb4a0) Data frame received for 3\nI0516 23:56:14.680985 1092 log.go:172] (0xc000838000) (3) Data frame handling\nI0516 23:56:14.681006 1092 log.go:172] (0xc000838000) (3) Data frame sent\nI0516 23:56:14.682181 1092 log.go:172] (0xc000acb4a0) Data frame received for 5\nI0516 23:56:14.682197 1092 log.go:172] (0xc00053ae60) (5) Data frame handling\nI0516 23:56:14.682355 1092 log.go:172] (0xc000acb4a0) Data frame received for 3\nI0516 23:56:14.682368 1092 log.go:172] (0xc000838000) (3) Data frame handling\nI0516 23:56:14.684490 1092 log.go:172] (0xc000acb4a0) Data frame received for 1\nI0516 23:56:14.684517 1092 log.go:172] (0xc000aae5a0) (1) Data frame handling\nI0516 23:56:14.684533 1092 log.go:172] (0xc000aae5a0) (1) Data frame sent\nI0516 23:56:14.684552 1092 log.go:172] 
(0xc000acb4a0) (0xc000aae5a0) Stream removed, broadcasting: 1\nI0516 23:56:14.684582 1092 log.go:172] (0xc000acb4a0) Go away received\nI0516 23:56:14.684885 1092 log.go:172] (0xc000acb4a0) (0xc000aae5a0) Stream removed, broadcasting: 1\nI0516 23:56:14.684903 1092 log.go:172] (0xc000acb4a0) (0xc000838000) Stream removed, broadcasting: 3\nI0516 23:56:14.684912 1092 log.go:172] (0xc000acb4a0) (0xc00053ae60) Stream removed, broadcasting: 5\n"
May 16 23:56:14.690: INFO: stdout: "iptables"
May 16 23:56:14.690: INFO: proxyMode: iptables
May 16 23:56:14.696: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 16 23:56:14.703: INFO: Pod kube-proxy-mode-detector still exists
May 16 23:56:16.704: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 16 23:56:16.708: INFO: Pod kube-proxy-mode-detector still exists
May 16 23:56:18.704: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 16 23:56:18.715: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-6092
STEP: creating replication controller affinity-nodeport-timeout in namespace services-6092
I0516 23:56:18.796511 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6092, replica count: 3
I0516 23:56:21.847039 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0516 23:56:24.847240 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 16 23:56:24.854: INFO: Creating new exec pod
May 16 23:56:29.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6092 execpod-affinity4qrjn -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
May 16 23:56:30.198: INFO:
stderr: "I0516 23:56:30.080215 1112 log.go:172] (0xc00063c4d0) (0xc000513c20) Create stream\nI0516 23:56:30.080277 1112 log.go:172] (0xc00063c4d0) (0xc000513c20) Stream added, broadcasting: 1\nI0516 23:56:30.082591 1112 log.go:172] (0xc00063c4d0) Reply frame received for 1\nI0516 23:56:30.082641 1112 log.go:172] (0xc00063c4d0) (0xc000476dc0) Create stream\nI0516 23:56:30.082659 1112 log.go:172] (0xc00063c4d0) (0xc000476dc0) Stream added, broadcasting: 3\nI0516 23:56:30.083602 1112 log.go:172] (0xc00063c4d0) Reply frame received for 3\nI0516 23:56:30.083650 1112 log.go:172] (0xc00063c4d0) (0xc0005c65a0) Create stream\nI0516 23:56:30.083667 1112 log.go:172] (0xc00063c4d0) (0xc0005c65a0) Stream added, broadcasting: 5\nI0516 23:56:30.084522 1112 log.go:172] (0xc00063c4d0) Reply frame received for 5\nI0516 23:56:30.180969 1112 log.go:172] (0xc00063c4d0) Data frame received for 5\nI0516 23:56:30.180997 1112 log.go:172] (0xc0005c65a0) (5) Data frame handling\nI0516 23:56:30.181014 1112 log.go:172] (0xc0005c65a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0516 23:56:30.189314 1112 log.go:172] (0xc00063c4d0) Data frame received for 5\nI0516 23:56:30.189356 1112 log.go:172] (0xc0005c65a0) (5) Data frame handling\nI0516 23:56:30.189380 1112 log.go:172] (0xc0005c65a0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0516 23:56:30.189558 1112 log.go:172] (0xc00063c4d0) Data frame received for 5\nI0516 23:56:30.189585 1112 log.go:172] (0xc0005c65a0) (5) Data frame handling\nI0516 23:56:30.189622 1112 log.go:172] (0xc00063c4d0) Data frame received for 3\nI0516 23:56:30.189660 1112 log.go:172] (0xc000476dc0) (3) Data frame handling\nI0516 23:56:30.191293 1112 log.go:172] (0xc00063c4d0) Data frame received for 1\nI0516 23:56:30.191309 1112 log.go:172] (0xc000513c20) (1) Data frame handling\nI0516 23:56:30.191320 1112 log.go:172] (0xc000513c20) (1) Data frame sent\nI0516 23:56:30.191327 1112 log.go:172] 
(0xc00063c4d0) (0xc000513c20) Stream removed, broadcasting: 1\nI0516 23:56:30.191573 1112 log.go:172] (0xc00063c4d0) (0xc000513c20) Stream removed, broadcasting: 1\nI0516 23:56:30.191587 1112 log.go:172] (0xc00063c4d0) (0xc000476dc0) Stream removed, broadcasting: 3\nI0516 23:56:30.191654 1112 log.go:172] (0xc00063c4d0) Go away received\nI0516 23:56:30.191714 1112 log.go:172] (0xc00063c4d0) (0xc0005c65a0) Stream removed, broadcasting: 5\n" May 16 23:56:30.198: INFO: stdout: "" May 16 23:56:30.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6092 execpod-affinity4qrjn -- /bin/sh -x -c nc -zv -t -w 2 10.102.106.209 80' May 16 23:56:30.431: INFO: stderr: "I0516 23:56:30.345662 1135 log.go:172] (0xc000533ce0) (0xc000543180) Create stream\nI0516 23:56:30.345724 1135 log.go:172] (0xc000533ce0) (0xc000543180) Stream added, broadcasting: 1\nI0516 23:56:30.348124 1135 log.go:172] (0xc000533ce0) Reply frame received for 1\nI0516 23:56:30.348189 1135 log.go:172] (0xc000533ce0) (0xc000436d20) Create stream\nI0516 23:56:30.348206 1135 log.go:172] (0xc000533ce0) (0xc000436d20) Stream added, broadcasting: 3\nI0516 23:56:30.349308 1135 log.go:172] (0xc000533ce0) Reply frame received for 3\nI0516 23:56:30.349367 1135 log.go:172] (0xc000533ce0) (0xc000668000) Create stream\nI0516 23:56:30.349389 1135 log.go:172] (0xc000533ce0) (0xc000668000) Stream added, broadcasting: 5\nI0516 23:56:30.350250 1135 log.go:172] (0xc000533ce0) Reply frame received for 5\nI0516 23:56:30.423194 1135 log.go:172] (0xc000533ce0) Data frame received for 3\nI0516 23:56:30.423253 1135 log.go:172] (0xc000436d20) (3) Data frame handling\nI0516 23:56:30.423296 1135 log.go:172] (0xc000533ce0) Data frame received for 5\nI0516 23:56:30.423334 1135 log.go:172] (0xc000668000) (5) Data frame handling\nI0516 23:56:30.423360 1135 log.go:172] (0xc000668000) (5) Data frame sent\nI0516 23:56:30.423377 1135 log.go:172] (0xc000533ce0) Data 
frame received for 5\nI0516 23:56:30.423400 1135 log.go:172] (0xc000668000) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.106.209 80\nConnection to 10.102.106.209 80 port [tcp/http] succeeded!\nI0516 23:56:30.424614 1135 log.go:172] (0xc000533ce0) Data frame received for 1\nI0516 23:56:30.424632 1135 log.go:172] (0xc000543180) (1) Data frame handling\nI0516 23:56:30.424647 1135 log.go:172] (0xc000543180) (1) Data frame sent\nI0516 23:56:30.424660 1135 log.go:172] (0xc000533ce0) (0xc000543180) Stream removed, broadcasting: 1\nI0516 23:56:30.424697 1135 log.go:172] (0xc000533ce0) Go away received\nI0516 23:56:30.424882 1135 log.go:172] (0xc000533ce0) (0xc000543180) Stream removed, broadcasting: 1\nI0516 23:56:30.424896 1135 log.go:172] (0xc000533ce0) (0xc000436d20) Stream removed, broadcasting: 3\nI0516 23:56:30.424902 1135 log.go:172] (0xc000533ce0) (0xc000668000) Stream removed, broadcasting: 5\n" May 16 23:56:30.431: INFO: stdout: "" May 16 23:56:30.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6092 execpod-affinity4qrjn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30538' May 16 23:56:30.655: INFO: stderr: "I0516 23:56:30.562372 1158 log.go:172] (0xc000af9340) (0xc000bda500) Create stream\nI0516 23:56:30.562461 1158 log.go:172] (0xc000af9340) (0xc000bda500) Stream added, broadcasting: 1\nI0516 23:56:30.567393 1158 log.go:172] (0xc000af9340) Reply frame received for 1\nI0516 23:56:30.567443 1158 log.go:172] (0xc000af9340) (0xc0005a6dc0) Create stream\nI0516 23:56:30.567455 1158 log.go:172] (0xc000af9340) (0xc0005a6dc0) Stream added, broadcasting: 3\nI0516 23:56:30.568361 1158 log.go:172] (0xc000af9340) Reply frame received for 3\nI0516 23:56:30.568408 1158 log.go:172] (0xc000af9340) (0xc000342140) Create stream\nI0516 23:56:30.568425 1158 log.go:172] (0xc000af9340) (0xc000342140) Stream added, broadcasting: 5\nI0516 23:56:30.569816 1158 log.go:172] (0xc000af9340) Reply 
frame received for 5\nI0516 23:56:30.647087 1158 log.go:172] (0xc000af9340) Data frame received for 3\nI0516 23:56:30.647131 1158 log.go:172] (0xc0005a6dc0) (3) Data frame handling\nI0516 23:56:30.647175 1158 log.go:172] (0xc000af9340) Data frame received for 5\nI0516 23:56:30.647191 1158 log.go:172] (0xc000342140) (5) Data frame handling\nI0516 23:56:30.647208 1158 log.go:172] (0xc000342140) (5) Data frame sent\nI0516 23:56:30.647223 1158 log.go:172] (0xc000af9340) Data frame received for 5\nI0516 23:56:30.647237 1158 log.go:172] (0xc000342140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30538\nConnection to 172.17.0.13 30538 port [tcp/30538] succeeded!\nI0516 23:56:30.649300 1158 log.go:172] (0xc000af9340) Data frame received for 1\nI0516 23:56:30.649332 1158 log.go:172] (0xc000bda500) (1) Data frame handling\nI0516 23:56:30.649364 1158 log.go:172] (0xc000bda500) (1) Data frame sent\nI0516 23:56:30.649387 1158 log.go:172] (0xc000af9340) (0xc000bda500) Stream removed, broadcasting: 1\nI0516 23:56:30.649412 1158 log.go:172] (0xc000af9340) Go away received\nI0516 23:56:30.649778 1158 log.go:172] (0xc000af9340) (0xc000bda500) Stream removed, broadcasting: 1\nI0516 23:56:30.649808 1158 log.go:172] (0xc000af9340) (0xc0005a6dc0) Stream removed, broadcasting: 3\nI0516 23:56:30.649823 1158 log.go:172] (0xc000af9340) (0xc000342140) Stream removed, broadcasting: 5\n" May 16 23:56:30.655: INFO: stdout: "" May 16 23:56:30.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6092 execpod-affinity4qrjn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30538' May 16 23:56:30.859: INFO: stderr: "I0516 23:56:30.783591 1178 log.go:172] (0xc000b97130) (0xc000748f00) Create stream\nI0516 23:56:30.783652 1178 log.go:172] (0xc000b97130) (0xc000748f00) Stream added, broadcasting: 1\nI0516 23:56:30.787461 1178 log.go:172] (0xc000b97130) Reply frame received for 1\nI0516 23:56:30.787511 1178 
log.go:172] (0xc000b97130) (0xc000735540) Create stream\nI0516 23:56:30.787522 1178 log.go:172] (0xc000b97130) (0xc000735540) Stream added, broadcasting: 3\nI0516 23:56:30.788316 1178 log.go:172] (0xc000b97130) Reply frame received for 3\nI0516 23:56:30.788382 1178 log.go:172] (0xc000b97130) (0xc0006fcaa0) Create stream\nI0516 23:56:30.788399 1178 log.go:172] (0xc000b97130) (0xc0006fcaa0) Stream added, broadcasting: 5\nI0516 23:56:30.789295 1178 log.go:172] (0xc000b97130) Reply frame received for 5\nI0516 23:56:30.851739 1178 log.go:172] (0xc000b97130) Data frame received for 5\nI0516 23:56:30.851863 1178 log.go:172] (0xc0006fcaa0) (5) Data frame handling\nI0516 23:56:30.851899 1178 log.go:172] (0xc0006fcaa0) (5) Data frame sent\nI0516 23:56:30.851915 1178 log.go:172] (0xc000b97130) Data frame received for 5\nI0516 23:56:30.851925 1178 log.go:172] (0xc0006fcaa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30538\nConnection to 172.17.0.12 30538 port [tcp/30538] succeeded!\nI0516 23:56:30.851953 1178 log.go:172] (0xc0006fcaa0) (5) Data frame sent\nI0516 23:56:30.852425 1178 log.go:172] (0xc000b97130) Data frame received for 5\nI0516 23:56:30.852444 1178 log.go:172] (0xc0006fcaa0) (5) Data frame handling\nI0516 23:56:30.852658 1178 log.go:172] (0xc000b97130) Data frame received for 3\nI0516 23:56:30.852679 1178 log.go:172] (0xc000735540) (3) Data frame handling\nI0516 23:56:30.854477 1178 log.go:172] (0xc000b97130) Data frame received for 1\nI0516 23:56:30.854501 1178 log.go:172] (0xc000748f00) (1) Data frame handling\nI0516 23:56:30.854521 1178 log.go:172] (0xc000748f00) (1) Data frame sent\nI0516 23:56:30.854685 1178 log.go:172] (0xc000b97130) (0xc000748f00) Stream removed, broadcasting: 1\nI0516 23:56:30.854782 1178 log.go:172] (0xc000b97130) Go away received\nI0516 23:56:30.854975 1178 log.go:172] (0xc000b97130) (0xc000748f00) Stream removed, broadcasting: 1\nI0516 23:56:30.854990 1178 log.go:172] (0xc000b97130) (0xc000735540) Stream removed, 
broadcasting: 3\nI0516 23:56:30.855000 1178 log.go:172] (0xc000b97130) (0xc0006fcaa0) Stream removed, broadcasting: 5\n" May 16 23:56:30.859: INFO: stdout: "" May 16 23:56:30.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6092 execpod-affinity4qrjn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30538/ ; done' May 16 23:56:31.155: INFO: stderr: "I0516 23:56:30.998668 1199 log.go:172] (0xc0006d0790) (0xc00021ed20) Create stream\nI0516 23:56:30.998725 1199 log.go:172] (0xc0006d0790) (0xc00021ed20) Stream added, broadcasting: 1\nI0516 23:56:31.002239 1199 log.go:172] (0xc0006d0790) Reply frame received for 1\nI0516 23:56:31.002299 1199 log.go:172] (0xc0006d0790) (0xc00009a000) Create stream\nI0516 23:56:31.002317 1199 log.go:172] (0xc0006d0790) (0xc00009a000) Stream added, broadcasting: 3\nI0516 23:56:31.003652 1199 log.go:172] (0xc0006d0790) Reply frame received for 3\nI0516 23:56:31.003697 1199 log.go:172] (0xc0006d0790) (0xc0006bd0e0) Create stream\nI0516 23:56:31.003715 1199 log.go:172] (0xc0006d0790) (0xc0006bd0e0) Stream added, broadcasting: 5\nI0516 23:56:31.004817 1199 log.go:172] (0xc0006d0790) Reply frame received for 5\nI0516 23:56:31.066763 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.066798 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.066811 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.066834 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.066844 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.066853 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.072806 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.072825 1199 log.go:172] (0xc00009a000) (3) Data frame 
handling\nI0516 23:56:31.072847 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.073722 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.073759 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.073779 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.073805 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.073821 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.073840 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.078869 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.078904 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.078925 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.079322 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.079359 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.079374 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.079397 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.079413 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.079451 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.082405 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.082430 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.082449 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.082775 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.082803 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.082814 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.082829 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.082836 1199 log.go:172] 
(0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.082845 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.086036 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.086057 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.086073 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.086465 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.086486 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.086492 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.086501 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.086506 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.086513 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0516 23:56:31.086520 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.086526 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.086536 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0516 23:56:31.092671 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.092689 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.092707 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.093599 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.093611 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.093617 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.093638 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.093656 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.093677 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.098549 1199 
log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.098561 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.098567 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.099225 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.099235 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.099240 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.099266 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.099288 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.099310 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0516 23:56:31.099324 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.099335 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.099360 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0516 23:56:31.105869 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.105892 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.105916 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.106643 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.106655 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.106660 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0516 23:56:31.106665 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.106670 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.106695 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.106750 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.106769 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.106794 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0516 
23:56:31.110118 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.110130 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.110138 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.110605 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.110616 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.110622 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.110661 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.110685 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.110701 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.115344 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.115361 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.115369 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.115865 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.115915 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.115937 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.115967 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.115980 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.116001 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.119852 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.119866 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.119875 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.120279 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.120291 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.120299 1199 log.go:172] (0xc00009a000) (3) Data 
frame sent\nI0516 23:56:31.120309 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.120314 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.120318 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0516 23:56:31.120322 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.120326 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.120337 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\nI0516 23:56:31.125871 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.125888 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.125906 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.126101 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.126133 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.126149 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.126170 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.126182 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.126200 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.130798 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.130814 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.130829 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.131344 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.131357 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.131377 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.131393 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.131404 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:30538/\nI0516 23:56:31.131422 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.134952 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.134981 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.135000 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.135230 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.135245 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.135259 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.135297 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.135309 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.135316 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.139662 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.139697 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.139725 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.140042 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.140070 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.140089 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.140108 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.140121 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.140130 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.143431 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.143450 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.143470 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.144306 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.144324 1199 
log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.144342 1199 log.go:172] (0xc0006bd0e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.144366 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.144401 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.144433 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.148770 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.148806 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.148827 1199 log.go:172] (0xc00009a000) (3) Data frame sent\nI0516 23:56:31.149973 1199 log.go:172] (0xc0006d0790) Data frame received for 5\nI0516 23:56:31.150008 1199 log.go:172] (0xc0006bd0e0) (5) Data frame handling\nI0516 23:56:31.150029 1199 log.go:172] (0xc0006d0790) Data frame received for 3\nI0516 23:56:31.150043 1199 log.go:172] (0xc00009a000) (3) Data frame handling\nI0516 23:56:31.151612 1199 log.go:172] (0xc0006d0790) Data frame received for 1\nI0516 23:56:31.151629 1199 log.go:172] (0xc00021ed20) (1) Data frame handling\nI0516 23:56:31.151649 1199 log.go:172] (0xc00021ed20) (1) Data frame sent\nI0516 23:56:31.151662 1199 log.go:172] (0xc0006d0790) (0xc00021ed20) Stream removed, broadcasting: 1\nI0516 23:56:31.151702 1199 log.go:172] (0xc0006d0790) Go away received\nI0516 23:56:31.151939 1199 log.go:172] (0xc0006d0790) (0xc00021ed20) Stream removed, broadcasting: 1\nI0516 23:56:31.151954 1199 log.go:172] (0xc0006d0790) (0xc00009a000) Stream removed, broadcasting: 3\nI0516 23:56:31.151960 1199 log.go:172] (0xc0006d0790) (0xc0006bd0e0) Stream removed, broadcasting: 5\n" May 16 23:56:31.156: INFO: stdout: 
"\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb\naffinity-nodeport-timeout-jq7xb"
May 16 23:56:31.156: INFO: Received response from host:
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May 16 23:56:31.156: INFO: Received response from host: affinity-nodeport-timeout-jq7xb
May
16 23:56:31.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6092 execpod-affinity4qrjn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30538/' May 16 23:56:31.372: INFO: stderr: "I0516 23:56:31.289677 1220 log.go:172] (0xc000ae2d10) (0xc00053a140) Create stream\nI0516 23:56:31.289724 1220 log.go:172] (0xc000ae2d10) (0xc00053a140) Stream added, broadcasting: 1\nI0516 23:56:31.292048 1220 log.go:172] (0xc000ae2d10) Reply frame received for 1\nI0516 23:56:31.292087 1220 log.go:172] (0xc000ae2d10) (0xc00085bcc0) Create stream\nI0516 23:56:31.292103 1220 log.go:172] (0xc000ae2d10) (0xc00085bcc0) Stream added, broadcasting: 3\nI0516 23:56:31.292904 1220 log.go:172] (0xc000ae2d10) Reply frame received for 3\nI0516 23:56:31.292955 1220 log.go:172] (0xc000ae2d10) (0xc000574460) Create stream\nI0516 23:56:31.292975 1220 log.go:172] (0xc000ae2d10) (0xc000574460) Stream added, broadcasting: 5\nI0516 23:56:31.294038 1220 log.go:172] (0xc000ae2d10) Reply frame received for 5\nI0516 23:56:31.360319 1220 log.go:172] (0xc000ae2d10) Data frame received for 5\nI0516 23:56:31.360371 1220 log.go:172] (0xc000574460) (5) Data frame handling\nI0516 23:56:31.360402 1220 log.go:172] (0xc000574460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:31.363873 1220 log.go:172] (0xc000ae2d10) Data frame received for 3\nI0516 23:56:31.363909 1220 log.go:172] (0xc00085bcc0) (3) Data frame handling\nI0516 23:56:31.363940 1220 log.go:172] (0xc00085bcc0) (3) Data frame sent\nI0516 23:56:31.364960 1220 log.go:172] (0xc000ae2d10) Data frame received for 3\nI0516 23:56:31.364987 1220 log.go:172] (0xc00085bcc0) (3) Data frame handling\nI0516 23:56:31.365337 1220 log.go:172] (0xc000ae2d10) Data frame received for 5\nI0516 23:56:31.365366 1220 log.go:172] (0xc000574460) (5) Data frame handling\nI0516 23:56:31.366994 1220 log.go:172] (0xc000ae2d10) Data frame 
received for 1\nI0516 23:56:31.367012 1220 log.go:172] (0xc00053a140) (1) Data frame handling\nI0516 23:56:31.367031 1220 log.go:172] (0xc00053a140) (1) Data frame sent\nI0516 23:56:31.367063 1220 log.go:172] (0xc000ae2d10) (0xc00053a140) Stream removed, broadcasting: 1\nI0516 23:56:31.367088 1220 log.go:172] (0xc000ae2d10) Go away received\nI0516 23:56:31.367492 1220 log.go:172] (0xc000ae2d10) (0xc00053a140) Stream removed, broadcasting: 1\nI0516 23:56:31.367517 1220 log.go:172] (0xc000ae2d10) (0xc00085bcc0) Stream removed, broadcasting: 3\nI0516 23:56:31.367532 1220 log.go:172] (0xc000ae2d10) (0xc000574460) Stream removed, broadcasting: 5\n" May 16 23:56:31.372: INFO: stdout: "affinity-nodeport-timeout-jq7xb" May 16 23:56:46.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6092 execpod-affinity4qrjn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:30538/' May 16 23:56:46.623: INFO: stderr: "I0516 23:56:46.516968 1239 log.go:172] (0xc0004189a0) (0xc000562960) Create stream\nI0516 23:56:46.517024 1239 log.go:172] (0xc0004189a0) (0xc000562960) Stream added, broadcasting: 1\nI0516 23:56:46.519751 1239 log.go:172] (0xc0004189a0) Reply frame received for 1\nI0516 23:56:46.519806 1239 log.go:172] (0xc0004189a0) (0xc000548000) Create stream\nI0516 23:56:46.519820 1239 log.go:172] (0xc0004189a0) (0xc000548000) Stream added, broadcasting: 3\nI0516 23:56:46.532522 1239 log.go:172] (0xc0004189a0) Reply frame received for 3\nI0516 23:56:46.532568 1239 log.go:172] (0xc0004189a0) (0xc00053e960) Create stream\nI0516 23:56:46.532579 1239 log.go:172] (0xc0004189a0) (0xc00053e960) Stream added, broadcasting: 5\nI0516 23:56:46.534573 1239 log.go:172] (0xc0004189a0) Reply frame received for 5\nI0516 23:56:46.610257 1239 log.go:172] (0xc0004189a0) Data frame received for 5\nI0516 23:56:46.610301 1239 log.go:172] (0xc00053e960) (5) Data frame handling\nI0516 23:56:46.610321 1239 
log.go:172] (0xc00053e960) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30538/\nI0516 23:56:46.615015 1239 log.go:172] (0xc0004189a0) Data frame received for 3\nI0516 23:56:46.615043 1239 log.go:172] (0xc000548000) (3) Data frame handling\nI0516 23:56:46.615061 1239 log.go:172] (0xc000548000) (3) Data frame sent\nI0516 23:56:46.615888 1239 log.go:172] (0xc0004189a0) Data frame received for 3\nI0516 23:56:46.615984 1239 log.go:172] (0xc000548000) (3) Data frame handling\nI0516 23:56:46.616020 1239 log.go:172] (0xc0004189a0) Data frame received for 5\nI0516 23:56:46.616042 1239 log.go:172] (0xc00053e960) (5) Data frame handling\nI0516 23:56:46.617777 1239 log.go:172] (0xc0004189a0) Data frame received for 1\nI0516 23:56:46.617818 1239 log.go:172] (0xc000562960) (1) Data frame handling\nI0516 23:56:46.617850 1239 log.go:172] (0xc000562960) (1) Data frame sent\nI0516 23:56:46.617870 1239 log.go:172] (0xc0004189a0) (0xc000562960) Stream removed, broadcasting: 1\nI0516 23:56:46.617894 1239 log.go:172] (0xc0004189a0) Go away received\nI0516 23:56:46.618306 1239 log.go:172] (0xc0004189a0) (0xc000562960) Stream removed, broadcasting: 1\nI0516 23:56:46.618325 1239 log.go:172] (0xc0004189a0) (0xc000548000) Stream removed, broadcasting: 3\nI0516 23:56:46.618336 1239 log.go:172] (0xc0004189a0) (0xc00053e960) Stream removed, broadcasting: 5\n" May 16 23:56:46.623: INFO: stdout: "affinity-nodeport-timeout-8s7jb" May 16 23:56:46.623: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6092, will wait for the garbage collector to delete the pods May 16 23:56:46.739: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.869898ms May 16 23:56:47.139: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.220793ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 
23:56:55.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6092" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:44.695 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":31,"skipped":527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:56:55.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-35f3db2f-9058-4d2e-ba7c-5ed748219440 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:56:55.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6574" for this suite. 
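The ConfigMap failure above is API-server validation rejecting an empty data key, not a test-framework error. A minimal manifest that would be rejected the same way might look like the following (the object name here is illustrative, not the generated name from the log):

```yaml
# Hypothetical manifest: creation fails because "" is not a valid
# ConfigMap data key (keys must match the [-._a-zA-Z0-9]+ pattern).
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key-demo
data:
  "": "value"   # empty key -> apiserver returns a validation error
```

The test passes precisely because the create call is expected to fail; no cleanup of the ConfigMap is needed since it is never persisted.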
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":32,"skipped":550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:56:55.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 23:56:55.826: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 23:56:57.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270215, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270215, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270216, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270215, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 23:57:00.920: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 16 23:57:00.940: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:57:00.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4747" for this suite. STEP: Destroying namespace "webhook-4747-markers" for this suite. 
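The "deny crd creation" test above registers a validating webhook scoped to CustomResourceDefinition CREATE operations, then verifies that a CRD create is rejected. A sketch of that kind of registration follows; the configuration name, webhook name, service reference, and `caBundle` are all hypothetical placeholders, not values taken from the log:

```yaml
# Sketch only: a validating webhook that intercepts CRD creation.
# Names, namespace, path, and caBundle below are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-example
webhooks:
  - name: deny-crd.example.com
    rules:
      - apiGroups: ["apiextensions.k8s.io"]
        apiVersions: ["*"]
        operations: ["CREATE"]
        resources: ["customresourcedefinitions"]
    clientConfig:
      service:
        namespace: webhook-demo        # placeholder
        name: e2e-test-webhook         # placeholder
        path: /crd
      caBundle: "<base64-encoded CA>"  # placeholder
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

With `failurePolicy: Fail`, any CRD create that the webhook denies (or that the webhook cannot be reached for) is rejected, which is the behavior the test asserts.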
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.961 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":33,"skipped":579,"failed":0} [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:57:01.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-b7c70ea4-a507-45b0-8374-7b2f6224cfb0 STEP: Creating secret with name s-test-opt-upd-689fda5b-943c-431e-8b27-3374a3979d90 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b7c70ea4-a507-45b0-8374-7b2f6224cfb0 STEP: Updating secret s-test-opt-upd-689fda5b-943c-431e-8b27-3374a3979d90 STEP: Creating secret with name s-test-opt-create-7e3b1759-00b7-4f05-88ba-79c653dd7e0c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:57:11.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8072" for this suite. • [SLOW TEST:10.235 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":579,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:57:11.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 16 23:57:19.530: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:19.565: INFO: Pod pod-with-prestop-http-hook still exists May 16 23:57:21.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:21.597: INFO: Pod pod-with-prestop-http-hook still exists May 16 23:57:23.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:23.570: INFO: Pod pod-with-prestop-http-hook still exists May 16 23:57:25.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:25.570: INFO: Pod pod-with-prestop-http-hook still exists May 16 23:57:27.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:27.569: INFO: Pod pod-with-prestop-http-hook still exists May 16 23:57:29.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:29.587: INFO: Pod pod-with-prestop-http-hook still exists May 16 23:57:31.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:31.570: INFO: Pod pod-with-prestop-http-hook still exists May 16 23:57:33.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:33.570: INFO: Pod pod-with-prestop-http-hook still exists May 16 23:57:35.566: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 16 23:57:35.570: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:57:35.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8871" for this suite. 
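The prestop test above creates a pod whose container declares a `preStop` HTTP hook, deletes the pod, then polls until it disappears and checks that the handler pod received the hook request. A sketch of the pod shape, assuming a hypothetical handler address and image:

```yaml
# Sketch of a pod with a preStop HTTP hook, as exercised above.
# The image, hook path, port, and host IP are assumptions; the e2e
# test points the hook at a separately created handler pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
    - name: pod-with-prestop-http-hook
      image: k8s.gcr.io/pause:3.2      # assumed image
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop
            port: 8080
            host: 10.244.0.10          # placeholder: handler-pod IP
```

The repeated "still exists" lines in the log are the expected grace-period window: the kubelet runs the preStop hook before the container is killed, so the pod lingers until the hook completes and termination finishes.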
• [SLOW TEST:24.302 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":585,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:57:35.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 23:57:36.256: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 23:57:38.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270256, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270256, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270256, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270256, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 23:57:41.305: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 23:57:41.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8009-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:57:42.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1014" for this suite. STEP: Destroying namespace "webhook-1014-markers" for this suite. 
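The stored-version test above works against a CRD that serves two versions and flips which one is the storage version mid-test. A sketch of that CRD shape, using the group and plural visible in the log but with a hypothetical kind and a permissive schema:

```yaml
# Sketch of the multi-version CRD the test patches: v1 starts as the
# storage version, then the test marks v2 as storage and patches the
# custom resource again. The kind name and schemas are assumptions;
# only one version may have storage: true at a time.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-webhook-8009-crds.webhook.example.com
spec:
  group: webhook.example.com
  scope: Namespaced
  names:
    plural: e2e-test-webhook-8009-crds
    kind: E2eTestWebhookCrd          # hypothetical kind
  versions:
    - name: v1
      served: true
      storage: true                  # set to false when v2 takes over
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v2
      served: true
      storage: false                 # patched to true by the test
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

The point of the test is that the mutating webhook keeps working for the custom resource regardless of which version is currently marked as storage.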
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.183 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":36,"skipped":592,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:57:42.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 16 23:57:42.908: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version 
watch-6775 /api/v1/namespaces/watch-6775/configmaps/e2e-watch-test-resource-version 065fb2c4-4744-40eb-b670-320fe77c8b93 5276500 0 2020-05-16 23:57:42 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-16 23:57:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 16 23:57:42.908: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6775 /api/v1/namespaces/watch-6775/configmaps/e2e-watch-test-resource-version 065fb2c4-4744-40eb-b670-320fe77c8b93 5276501 0 2020-05-16 23:57:42 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-16 23:57:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:57:42.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6775" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":37,"skipped":596,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:57:42.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 16 23:57:43.034: INFO: Waiting up to 5m0s for pod "pod-a3ea8c98-05fb-4643-a215-242252eb87cc" in namespace "emptydir-481" to be "Succeeded or Failed" May 16 23:57:43.064: INFO: Pod "pod-a3ea8c98-05fb-4643-a215-242252eb87cc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.560143ms May 16 23:57:45.068: INFO: Pod "pod-a3ea8c98-05fb-4643-a215-242252eb87cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034545849s May 16 23:57:47.072: INFO: Pod "pod-a3ea8c98-05fb-4643-a215-242252eb87cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038053868s STEP: Saw pod success May 16 23:57:47.072: INFO: Pod "pod-a3ea8c98-05fb-4643-a215-242252eb87cc" satisfied condition "Succeeded or Failed" May 16 23:57:47.074: INFO: Trying to get logs from node latest-worker2 pod pod-a3ea8c98-05fb-4643-a215-242252eb87cc container test-container: STEP: delete the pod May 16 23:57:47.115: INFO: Waiting for pod pod-a3ea8c98-05fb-4643-a215-242252eb87cc to disappear May 16 23:57:47.120: INFO: Pod pod-a3ea8c98-05fb-4643-a215-242252eb87cc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:57:47.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-481" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":38,"skipped":602,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:57:47.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 16 23:57:47.200: INFO: Waiting up to 5m0s for pod "client-containers-6ee875c1-f349-470e-a35f-c0e94ad36312" in namespace 
"containers-7189" to be "Succeeded or Failed" May 16 23:57:47.216: INFO: Pod "client-containers-6ee875c1-f349-470e-a35f-c0e94ad36312": Phase="Pending", Reason="", readiness=false. Elapsed: 15.293811ms May 16 23:57:49.220: INFO: Pod "client-containers-6ee875c1-f349-470e-a35f-c0e94ad36312": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019215753s May 16 23:57:51.223: INFO: Pod "client-containers-6ee875c1-f349-470e-a35f-c0e94ad36312": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022838016s STEP: Saw pod success May 16 23:57:51.223: INFO: Pod "client-containers-6ee875c1-f349-470e-a35f-c0e94ad36312" satisfied condition "Succeeded or Failed" May 16 23:57:51.226: INFO: Trying to get logs from node latest-worker2 pod client-containers-6ee875c1-f349-470e-a35f-c0e94ad36312 container test-container: STEP: delete the pod May 16 23:57:51.248: INFO: Waiting for pod client-containers-6ee875c1-f349-470e-a35f-c0e94ad36312 to disappear May 16 23:57:51.260: INFO: Pod client-containers-6ee875c1-f349-470e-a35f-c0e94ad36312 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:57:51.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7189" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:57:51.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 16 23:57:51.895: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 16 23:57:53.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270271, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270271, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270271, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270271, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 16 23:57:57.010: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:57:57.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4250" for this suite. STEP: Destroying namespace "webhook-4250-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.956 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":40,"skipped":655,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:57:57.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 16 23:57:57.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7589' May 16 23:57:57.834: INFO: stderr: "" May 16 23:57:57.834: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 23:57:57.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7589' May 16 23:57:58.000: INFO: stderr: "" May 16 23:57:58.000: INFO: stdout: "update-demo-nautilus-6f5jk update-demo-nautilus-6ghbx " May 16 23:57:58.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:57:58.242: INFO: stderr: "" May 16 23:57:58.242: INFO: stdout: "" May 16 23:57:58.242: INFO: update-demo-nautilus-6f5jk is created but not running May 16 23:58:03.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7589' May 16 23:58:03.348: INFO: stderr: "" May 16 23:58:03.348: INFO: stdout: "update-demo-nautilus-6f5jk update-demo-nautilus-6ghbx " May 16 23:58:03.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:03.444: INFO: stderr: "" May 16 23:58:03.444: INFO: stdout: "true" May 16 23:58:03.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:03.545: INFO: stderr: "" May 16 23:58:03.545: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 23:58:03.545: INFO: validating pod update-demo-nautilus-6f5jk May 16 23:58:03.554: INFO: got data: { "image": "nautilus.jpg" } May 16 23:58:03.554: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 23:58:03.554: INFO: update-demo-nautilus-6f5jk is verified up and running May 16 23:58:03.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6ghbx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:03.661: INFO: stderr: "" May 16 23:58:03.661: INFO: stdout: "true" May 16 23:58:03.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6ghbx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:03.748: INFO: stderr: "" May 16 23:58:03.748: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 23:58:03.748: INFO: validating pod update-demo-nautilus-6ghbx May 16 23:58:03.760: INFO: got data: { "image": "nautilus.jpg" } May 16 23:58:03.760: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 23:58:03.760: INFO: update-demo-nautilus-6ghbx is verified up and running STEP: scaling down the replication controller May 16 23:58:03.763: INFO: scanned /root for discovery docs: May 16 23:58:03.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7589' May 16 23:58:04.904: INFO: stderr: "" May 16 23:58:04.904: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 16 23:58:04.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7589' May 16 23:58:05.000: INFO: stderr: "" May 16 23:58:05.000: INFO: stdout: "update-demo-nautilus-6f5jk update-demo-nautilus-6ghbx " STEP: Replicas for name=update-demo: expected=1 actual=2 May 16 23:58:10.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7589' May 16 23:58:10.106: INFO: stderr: "" May 16 23:58:10.106: INFO: stdout: "update-demo-nautilus-6f5jk " May 16 23:58:10.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:10.209: INFO: stderr: "" May 16 23:58:10.209: INFO: stdout: "true" May 16 23:58:10.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:10.322: INFO: stderr: "" May 16 23:58:10.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 23:58:10.322: INFO: validating pod update-demo-nautilus-6f5jk May 16 23:58:10.326: INFO: got data: { "image": "nautilus.jpg" } May 16 23:58:10.326: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 16 23:58:10.326: INFO: update-demo-nautilus-6f5jk is verified up and running STEP: scaling up the replication controller May 16 23:58:10.328: INFO: scanned /root for discovery docs: May 16 23:58:10.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7589' May 16 23:58:11.463: INFO: stderr: "" May 16 23:58:11.463: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 16 23:58:11.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7589' May 16 23:58:11.568: INFO: stderr: "" May 16 23:58:11.568: INFO: stdout: "update-demo-nautilus-6f5jk update-demo-nautilus-vk7dk " May 16 23:58:11.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:11.669: INFO: stderr: "" May 16 23:58:11.669: INFO: stdout: "true" May 16 23:58:11.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:11.786: INFO: stderr: "" May 16 23:58:11.786: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 23:58:11.786: INFO: validating pod update-demo-nautilus-6f5jk May 16 23:58:11.789: INFO: got data: { "image": "nautilus.jpg" } May 16 23:58:11.789: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 23:58:11.789: INFO: update-demo-nautilus-6f5jk is verified up and running May 16 23:58:11.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vk7dk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:11.878: INFO: stderr: "" May 16 23:58:11.878: INFO: stdout: "" May 16 23:58:11.878: INFO: update-demo-nautilus-vk7dk is created but not running May 16 23:58:16.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7589' May 16 23:58:16.978: INFO: stderr: "" May 16 23:58:16.978: INFO: stdout: "update-demo-nautilus-6f5jk update-demo-nautilus-vk7dk " May 16 23:58:16.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:17.101: INFO: stderr: "" May 16 23:58:17.101: INFO: stdout: "true" May 16 23:58:17.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6f5jk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:17.197: INFO: stderr: "" May 16 23:58:17.197: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 23:58:17.197: INFO: validating pod update-demo-nautilus-6f5jk May 16 23:58:17.200: INFO: got data: { "image": "nautilus.jpg" } May 16 23:58:17.201: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 23:58:17.201: INFO: update-demo-nautilus-6f5jk is verified up and running May 16 23:58:17.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vk7dk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:17.297: INFO: stderr: "" May 16 23:58:17.297: INFO: stdout: "true" May 16 23:58:17.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vk7dk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7589' May 16 23:58:17.426: INFO: stderr: "" May 16 23:58:17.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 23:58:17.426: INFO: validating pod update-demo-nautilus-vk7dk May 16 23:58:17.429: INFO: got data: { "image": "nautilus.jpg" } May 16 23:58:17.429: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 23:58:17.429: INFO: update-demo-nautilus-vk7dk is verified up and running STEP: using delete to clean up resources May 16 23:58:17.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7589' May 16 23:58:17.527: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 23:58:17.527: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 16 23:58:17.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7589' May 16 23:58:17.634: INFO: stderr: "No resources found in kubectl-7589 namespace.\n" May 16 23:58:17.634: INFO: stdout: "" May 16 23:58:17.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7589 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 23:58:17.745: INFO: stderr: "" May 16 23:58:17.745: INFO: stdout: "update-demo-nautilus-6f5jk\nupdate-demo-nautilus-vk7dk\n" May 16 23:58:18.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get 
rc,svc -l name=update-demo --no-headers --namespace=kubectl-7589' May 16 23:58:18.369: INFO: stderr: "No resources found in kubectl-7589 namespace.\n" May 16 23:58:18.369: INFO: stdout: "" May 16 23:58:18.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7589 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 23:58:18.472: INFO: stderr: "" May 16 23:58:18.472: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:58:18.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7589" for this suite. • [SLOW TEST:21.255 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":41,"skipped":697,"failed":0} SS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:58:18.480: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:58:18.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1372" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":42,"skipped":699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:58:18.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 16 23:58:18.903: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:58:26.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3486" for this suite. • [SLOW TEST:8.144 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":43,"skipped":741,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:58:26.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 16 23:58:27.560: INFO: created pod pod-service-account-defaultsa May 16 23:58:27.560: INFO: pod pod-service-account-defaultsa service account token volume 
mount: true May 16 23:58:27.566: INFO: created pod pod-service-account-mountsa May 16 23:58:27.566: INFO: pod pod-service-account-mountsa service account token volume mount: true May 16 23:58:27.588: INFO: created pod pod-service-account-nomountsa May 16 23:58:27.588: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 16 23:58:27.623: INFO: created pod pod-service-account-defaultsa-mountspec May 16 23:58:27.623: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 16 23:58:27.671: INFO: created pod pod-service-account-mountsa-mountspec May 16 23:58:27.671: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 16 23:58:27.706: INFO: created pod pod-service-account-nomountsa-mountspec May 16 23:58:27.706: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 16 23:58:27.751: INFO: created pod pod-service-account-defaultsa-nomountspec May 16 23:58:27.751: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 16 23:58:27.867: INFO: created pod pod-service-account-mountsa-nomountspec May 16 23:58:27.867: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 16 23:58:27.878: INFO: created pod pod-service-account-nomountsa-nomountspec May 16 23:58:27.878: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:58:27.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1201" for this suite. 
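As an aside on what the ServiceAccounts test above is exercising: each `service account token volume mount: true/false` line reflects the effective value of `automountServiceAccountToken`, which can be set on the ServiceAccount or on the pod spec (the pod-level field takes precedence). A minimal illustrative manifest for the pod-level opt-out — names and image here are placeholders, not taken from this run — looks like:

```yaml
# Hypothetical pod manifest illustrating the opt-out checked by the test.
# automountServiceAccountToken is a real field on both the Pod spec and the
# ServiceAccount object; the pod-level setting overrides the ServiceAccount's.
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod            # placeholder name
spec:
  automountServiceAccountToken: false   # token volume will NOT be mounted
  containers:
  - name: main
    image: registry.k8s.io/pause:3.9    # placeholder image
```

With this set, no `/var/run/secrets/kubernetes.io/serviceaccount` volume is projected into the pod, which is what the `mount: false` cases above verify.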
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":44,"skipped":756,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:58:28.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 16 23:58:28.135: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 16 23:58:33.221: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 16 23:58:43.228: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 16 23:58:45.238: INFO: Creating deployment "test-rollover-deployment" May 16 23:58:45.262: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 16 23:58:47.297: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 16 23:58:47.304: INFO: Ensure that both replica sets have 1 created replica May 16 23:58:47.311: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 16 23:58:47.318: INFO: Updating deployment test-rollover-deployment May 16 23:58:47.318: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 16 
23:58:49.347: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 16 23:58:49.352: INFO: Make sure deployment "test-rollover-deployment" is complete May 16 23:58:49.357: INFO: all replica sets need to contain the pod-template-hash label May 16 23:58:49.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270327, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 23:58:51.367: INFO: all replica sets need to contain the pod-template-hash label May 16 23:58:51.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270331, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 23:58:53.367: INFO: all replica sets need to contain the pod-template-hash label May 16 23:58:53.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270331, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 23:58:55.366: INFO: all replica sets need to contain the pod-template-hash label May 16 23:58:55.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270331, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 23:58:57.365: INFO: all replica sets need to contain the pod-template-hash label May 16 23:58:57.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270331, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 23:58:59.366: INFO: all replica sets need to contain the pod-template-hash label May 16 23:58:59.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270331, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270325, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 16 23:59:01.364: INFO: May 16 23:59:01.364: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 16 23:59:01.372: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4185 /apis/apps/v1/namespaces/deployment-4185/deployments/test-rollover-deployment 1083d7a4-81b5-4207-8727-cd5001a9baaf 5277164 2 2020-05-16 23:58:45 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-16 23:58:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-16 23:59:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00033ff78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-16 23:58:45 +0000 UTC,LastTransitionTime:2020-05-16 
23:58:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-16 23:59:01 +0000 UTC,LastTransitionTime:2020-05-16 23:58:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 16 23:59:01.375: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-4185 /apis/apps/v1/namespaces/deployment-4185/replicasets/test-rollover-deployment-7c4fd9c879 4df20179-df9c-48da-a5ef-933a4f6f4027 5277153 2 2020-05-16 23:58:47 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 1083d7a4-81b5-4207-8727-cd5001a9baaf 0xc000e46dd7 0xc000e46dd8}] [] [{kube-controller-manager Update apps/v1 2020-05-16 23:59:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1083d7a4-81b5-4207-8727-cd5001a9baaf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000e46ef8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 16 23:59:01.376: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 16 23:59:01.376: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4185 /apis/apps/v1/namespaces/deployment-4185/replicasets/test-rollover-controller ed70a980-a6f6-4b02-88e7-8de8fc83b4e1 5277163 2 2020-05-16 23:58:28 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 1083d7a4-81b5-4207-8727-cd5001a9baaf 0xc000e468f7 0xc000e468f8}] [] [{e2e.test Update apps/v1 2020-05-16 23:58:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-16 23:59:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1083d7a4-81b5-4207-8727-cd5001a9baaf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000e46a48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 23:59:01.376: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-4185 /apis/apps/v1/namespaces/deployment-4185/replicasets/test-rollover-deployment-5686c4cfd5 1488adde-7b8e-4c11-82b4-f520c36e288a 5277101 2 2020-05-16 23:58:45 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 1083d7a4-81b5-4207-8727-cd5001a9baaf 0xc000e46b67 0xc000e46b68}] [] [{kube-controller-manager Update apps/v1 2020-05-16 23:58:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1083d7a4-81b5-4207-8727-cd5001a9baaf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000e46d68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] 
map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 16 23:59:01.379: INFO: Pod "test-rollover-deployment-7c4fd9c879-sn74t" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-sn74t test-rollover-deployment-7c4fd9c879- deployment-4185 /api/v1/namespaces/deployment-4185/pods/test-rollover-deployment-7c4fd9c879-sn74t 6986f5e0-c4c2-4315-9fb7-c3084e14d59c 5277121 0 2020-05-16 23:58:47 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 4df20179-df9c-48da-a5ef-933a4f6f4027 0xc00084f697 0xc00084f698}] [] [{kube-controller-manager Update v1 2020-05-16 23:58:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4df20179-df9c-48da-a5ef-933a4f6f4027\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-16 23:58:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.124\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qt82m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qt82m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qt82m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeD
evices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 23:58:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 23:58:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 23:58:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-16 23:58:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.124,StartTime:2020-05-16 23:58:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-16 23:58:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://57a91640da1308a97b9eeddf53f724a36752f23f1eca8cf02403beb6c25932e3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:59:01.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4185" for this suite. 
• [SLOW TEST:33.374 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":45,"skipped":789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:59:01.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 16 23:59:01.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9756' May 16 23:59:02.689: INFO: stderr: "" May 16 23:59:02.689: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 16 23:59:02.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9756' May 16 23:59:02.858: INFO: stderr: "" May 16 23:59:02.858: INFO: stdout: "update-demo-nautilus-ldm4c update-demo-nautilus-r6zkm " May 16 23:59:02.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ldm4c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9756' May 16 23:59:02.976: INFO: stderr: "" May 16 23:59:02.976: INFO: stdout: "" May 16 23:59:02.976: INFO: update-demo-nautilus-ldm4c is created but not running May 16 23:59:07.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9756' May 16 23:59:08.069: INFO: stderr: "" May 16 23:59:08.069: INFO: stdout: "update-demo-nautilus-ldm4c update-demo-nautilus-r6zkm " May 16 23:59:08.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ldm4c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9756' May 16 23:59:08.177: INFO: stderr: "" May 16 23:59:08.177: INFO: stdout: "true" May 16 23:59:08.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ldm4c -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9756' May 16 23:59:08.282: INFO: stderr: "" May 16 23:59:08.282: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 23:59:08.282: INFO: validating pod update-demo-nautilus-ldm4c May 16 23:59:08.286: INFO: got data: { "image": "nautilus.jpg" } May 16 23:59:08.286: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 16 23:59:08.286: INFO: update-demo-nautilus-ldm4c is verified up and running May 16 23:59:08.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6zkm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9756' May 16 23:59:08.395: INFO: stderr: "" May 16 23:59:08.395: INFO: stdout: "true" May 16 23:59:08.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6zkm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9756' May 16 23:59:08.505: INFO: stderr: "" May 16 23:59:08.505: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 16 23:59:08.505: INFO: validating pod update-demo-nautilus-r6zkm May 16 23:59:08.509: INFO: got data: { "image": "nautilus.jpg" } May 16 23:59:08.509: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 16 23:59:08.509: INFO: update-demo-nautilus-r6zkm is verified up and running STEP: using delete to clean up resources May 16 23:59:08.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9756' May 16 23:59:08.620: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 16 23:59:08.620: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 16 23:59:08.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9756' May 16 23:59:08.726: INFO: stderr: "No resources found in kubectl-9756 namespace.\n" May 16 23:59:08.726: INFO: stdout: "" May 16 23:59:08.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9756 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 16 23:59:08.832: INFO: stderr: "" May 16 23:59:08.832: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:59:08.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9756" for this suite. 
• [SLOW TEST:7.452 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":46,"skipped":839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:59:08.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 16 23:59:09.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-034ac9dc-16a9-446a-a064-6765b9ea42ee" in namespace "projected-4308" to be "Succeeded or Failed" May 16 23:59:09.035: INFO: Pod "downwardapi-volume-034ac9dc-16a9-446a-a064-6765b9ea42ee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.603276ms May 16 23:59:11.094: INFO: Pod "downwardapi-volume-034ac9dc-16a9-446a-a064-6765b9ea42ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067605787s May 16 23:59:13.099: INFO: Pod "downwardapi-volume-034ac9dc-16a9-446a-a064-6765b9ea42ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072065654s STEP: Saw pod success May 16 23:59:13.099: INFO: Pod "downwardapi-volume-034ac9dc-16a9-446a-a064-6765b9ea42ee" satisfied condition "Succeeded or Failed" May 16 23:59:13.102: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-034ac9dc-16a9-446a-a064-6765b9ea42ee container client-container: STEP: delete the pod May 16 23:59:13.222: INFO: Waiting for pod downwardapi-volume-034ac9dc-16a9-446a-a064-6765b9ea42ee to disappear May 16 23:59:13.251: INFO: Pod downwardapi-volume-034ac9dc-16a9-446a-a064-6765b9ea42ee no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:59:13.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4308" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":47,"skipped":874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:59:13.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 16 23:59:17.427: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7320 PodName:var-expansion-529b2842-3b01-4e93-8464-81fd6527789b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 23:59:17.427: INFO: >>> kubeConfig: /root/.kube/config I0516 23:59:17.461663 7 log.go:172] (0xc002cacfd0) (0xc00260bae0) Create stream I0516 23:59:17.461693 7 log.go:172] (0xc002cacfd0) (0xc00260bae0) Stream added, broadcasting: 1 I0516 23:59:17.463328 7 log.go:172] (0xc002cacfd0) Reply frame received for 1 I0516 23:59:17.463375 7 log.go:172] (0xc002cacfd0) (0xc002049040) Create stream I0516 23:59:17.463387 7 log.go:172] (0xc002cacfd0) (0xc002049040) Stream added, broadcasting: 3 I0516 23:59:17.464204 7 log.go:172] (0xc002cacfd0) Reply frame received for 3 I0516 23:59:17.464241 7 log.go:172] 
(0xc002cacfd0) (0xc00260bb80) Create stream I0516 23:59:17.464262 7 log.go:172] (0xc002cacfd0) (0xc00260bb80) Stream added, broadcasting: 5 I0516 23:59:17.465074 7 log.go:172] (0xc002cacfd0) Reply frame received for 5 I0516 23:59:17.560823 7 log.go:172] (0xc002cacfd0) Data frame received for 5 I0516 23:59:17.560858 7 log.go:172] (0xc00260bb80) (5) Data frame handling I0516 23:59:17.560878 7 log.go:172] (0xc002cacfd0) Data frame received for 3 I0516 23:59:17.560889 7 log.go:172] (0xc002049040) (3) Data frame handling I0516 23:59:17.562203 7 log.go:172] (0xc002cacfd0) Data frame received for 1 I0516 23:59:17.562222 7 log.go:172] (0xc00260bae0) (1) Data frame handling I0516 23:59:17.562235 7 log.go:172] (0xc00260bae0) (1) Data frame sent I0516 23:59:17.562271 7 log.go:172] (0xc002cacfd0) (0xc00260bae0) Stream removed, broadcasting: 1 I0516 23:59:17.562293 7 log.go:172] (0xc002cacfd0) Go away received I0516 23:59:17.562407 7 log.go:172] (0xc002cacfd0) (0xc00260bae0) Stream removed, broadcasting: 1 I0516 23:59:17.562439 7 log.go:172] (0xc002cacfd0) (0xc002049040) Stream removed, broadcasting: 3 I0516 23:59:17.562462 7 log.go:172] (0xc002cacfd0) (0xc00260bb80) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 16 23:59:17.565: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-7320 PodName:var-expansion-529b2842-3b01-4e93-8464-81fd6527789b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 16 23:59:17.565: INFO: >>> kubeConfig: /root/.kube/config I0516 23:59:17.590815 7 log.go:172] (0xc001ecf810) (0xc002049540) Create stream I0516 23:59:17.590844 7 log.go:172] (0xc001ecf810) (0xc002049540) Stream added, broadcasting: 1 I0516 23:59:17.597718 7 log.go:172] (0xc001ecf810) Reply frame received for 1 I0516 23:59:17.597751 7 log.go:172] (0xc001ecf810) (0xc001984000) Create stream I0516 23:59:17.597761 7 log.go:172] (0xc001ecf810) (0xc001984000) Stream 
added, broadcasting: 3 I0516 23:59:17.598618 7 log.go:172] (0xc001ecf810) Reply frame received for 3 I0516 23:59:17.598670 7 log.go:172] (0xc001ecf810) (0xc002048000) Create stream I0516 23:59:17.598684 7 log.go:172] (0xc001ecf810) (0xc002048000) Stream added, broadcasting: 5 I0516 23:59:17.599446 7 log.go:172] (0xc001ecf810) Reply frame received for 5 I0516 23:59:17.666413 7 log.go:172] (0xc001ecf810) Data frame received for 3 I0516 23:59:17.666445 7 log.go:172] (0xc001984000) (3) Data frame handling I0516 23:59:17.666474 7 log.go:172] (0xc001ecf810) Data frame received for 5 I0516 23:59:17.666503 7 log.go:172] (0xc002048000) (5) Data frame handling I0516 23:59:17.668288 7 log.go:172] (0xc001ecf810) Data frame received for 1 I0516 23:59:17.668306 7 log.go:172] (0xc002049540) (1) Data frame handling I0516 23:59:17.668324 7 log.go:172] (0xc002049540) (1) Data frame sent I0516 23:59:17.668356 7 log.go:172] (0xc001ecf810) (0xc002049540) Stream removed, broadcasting: 1 I0516 23:59:17.668427 7 log.go:172] (0xc001ecf810) (0xc002049540) Stream removed, broadcasting: 1 I0516 23:59:17.668446 7 log.go:172] (0xc001ecf810) Go away received I0516 23:59:17.668478 7 log.go:172] (0xc001ecf810) (0xc001984000) Stream removed, broadcasting: 3 I0516 23:59:17.668556 7 log.go:172] (0xc001ecf810) (0xc002048000) Stream removed, broadcasting: 5 STEP: updating the annotation value May 16 23:59:18.178: INFO: Successfully updated pod "var-expansion-529b2842-3b01-4e93-8464-81fd6527789b" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 16 23:59:18.189: INFO: Deleting pod "var-expansion-529b2842-3b01-4e93-8464-81fd6527789b" in namespace "var-expansion-7320" May 16 23:59:18.193: INFO: Wait up to 5m0s for pod "var-expansion-529b2842-3b01-4e93-8464-81fd6527789b" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 16 23:59:54.243: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7320" for this suite. • [SLOW TEST:41.010 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":48,"skipped":908,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 16 23:59:54.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 17 00:00:00.400: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-445 PodName:pod-sharedvolume-08a0bc39-3d5f-4e0b-818a-68bb2201b1b1 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:00:00.400: INFO: >>> kubeConfig: /root/.kube/config I0517 00:00:00.439261 7 log.go:172] (0xc002cac4d0) (0xc002c5b540) Create stream I0517 
00:00:00.439307 7 log.go:172] (0xc002cac4d0) (0xc002c5b540) Stream added, broadcasting: 1 I0517 00:00:00.441414 7 log.go:172] (0xc002cac4d0) Reply frame received for 1 I0517 00:00:00.441456 7 log.go:172] (0xc002cac4d0) (0xc001984be0) Create stream I0517 00:00:00.441470 7 log.go:172] (0xc002cac4d0) (0xc001984be0) Stream added, broadcasting: 3 I0517 00:00:00.442582 7 log.go:172] (0xc002cac4d0) Reply frame received for 3 I0517 00:00:00.442618 7 log.go:172] (0xc002cac4d0) (0xc002c5b5e0) Create stream I0517 00:00:00.442631 7 log.go:172] (0xc002cac4d0) (0xc002c5b5e0) Stream added, broadcasting: 5 I0517 00:00:00.443719 7 log.go:172] (0xc002cac4d0) Reply frame received for 5 I0517 00:00:00.517722 7 log.go:172] (0xc002cac4d0) Data frame received for 5 I0517 00:00:00.517757 7 log.go:172] (0xc002c5b5e0) (5) Data frame handling I0517 00:00:00.517780 7 log.go:172] (0xc002cac4d0) Data frame received for 3 I0517 00:00:00.517802 7 log.go:172] (0xc001984be0) (3) Data frame handling I0517 00:00:00.517835 7 log.go:172] (0xc001984be0) (3) Data frame sent I0517 00:00:00.517865 7 log.go:172] (0xc002cac4d0) Data frame received for 3 I0517 00:00:00.517880 7 log.go:172] (0xc001984be0) (3) Data frame handling I0517 00:00:00.519344 7 log.go:172] (0xc002cac4d0) Data frame received for 1 I0517 00:00:00.519388 7 log.go:172] (0xc002c5b540) (1) Data frame handling I0517 00:00:00.519452 7 log.go:172] (0xc002c5b540) (1) Data frame sent I0517 00:00:00.519495 7 log.go:172] (0xc002cac4d0) (0xc002c5b540) Stream removed, broadcasting: 1 I0517 00:00:00.519537 7 log.go:172] (0xc002cac4d0) Go away received I0517 00:00:00.519705 7 log.go:172] (0xc002cac4d0) (0xc002c5b540) Stream removed, broadcasting: 1 I0517 00:00:00.519730 7 log.go:172] (0xc002cac4d0) (0xc001984be0) Stream removed, broadcasting: 3 I0517 00:00:00.519744 7 log.go:172] (0xc002cac4d0) (0xc002c5b5e0) Stream removed, broadcasting: 5 May 17 00:00:00.519: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:00:00.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-445" for this suite. • [SLOW TEST:6.259 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":49,"skipped":926,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:00:00.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:00:00.610: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 17 00:00:00.617: INFO: Number of nodes with available pods: 0 May 17 00:00:00.617: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 17 00:00:00.708: INFO: Number of nodes with available pods: 0 May 17 00:00:00.708: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:01.712: INFO: Number of nodes with available pods: 0 May 17 00:00:01.712: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:02.713: INFO: Number of nodes with available pods: 0 May 17 00:00:02.713: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:03.712: INFO: Number of nodes with available pods: 0 May 17 00:00:03.712: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:04.713: INFO: Number of nodes with available pods: 0 May 17 00:00:04.713: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:05.762: INFO: Number of nodes with available pods: 0 May 17 00:00:05.762: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:06.713: INFO: Number of nodes with available pods: 0 May 17 00:00:06.713: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:07.712: INFO: Number of nodes with available pods: 1 May 17 00:00:07.712: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 17 00:00:07.751: INFO: Number of nodes with available pods: 1 May 17 00:00:07.751: INFO: Number of running nodes: 0, number of available pods: 1 May 17 00:00:08.796: INFO: Number of nodes with available pods: 0 May 17 00:00:08.796: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 17 00:00:08.810: INFO: Number of nodes with available pods: 0 May 17 00:00:08.810: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:09.815: INFO: Number of nodes with available pods: 0 May 17 00:00:09.815: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:10.814: INFO: Number of nodes with 
available pods: 0 May 17 00:00:10.814: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:11.815: INFO: Number of nodes with available pods: 0 May 17 00:00:11.815: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:12.814: INFO: Number of nodes with available pods: 0 May 17 00:00:12.814: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:13.814: INFO: Number of nodes with available pods: 0 May 17 00:00:13.814: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:14.814: INFO: Number of nodes with available pods: 0 May 17 00:00:14.814: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:15.814: INFO: Number of nodes with available pods: 0 May 17 00:00:15.814: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:16.815: INFO: Number of nodes with available pods: 0 May 17 00:00:16.815: INFO: Node latest-worker is running more than one daemon pod May 17 00:00:17.825: INFO: Number of nodes with available pods: 1 May 17 00:00:17.825: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9415, will wait for the garbage collector to delete the pods May 17 00:00:17.889: INFO: Deleting DaemonSet.extensions daemon-set took: 6.664771ms May 17 00:00:18.189: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.234211ms May 17 00:00:24.903: INFO: Number of nodes with available pods: 0 May 17 00:00:24.903: INFO: Number of running nodes: 0, number of available pods: 0 May 17 00:00:24.908: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9415/daemonsets","resourceVersion":"5277625"},"items":null} May 17 00:00:24.911: 
INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9415/pods","resourceVersion":"5277625"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:00:24.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9415" for this suite. • [SLOW TEST:24.420 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":50,"skipped":937,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:00:24.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:00:25.673: INFO: 
deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:00:27.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270425, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270425, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270425, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270425, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:00:30.733: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:00:30.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:00:31.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8338" for this suite. STEP: Destroying namespace "webhook-8338-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.090 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":51,"skipped":953,"failed":0} [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:00:32.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-8762/secret-test-4fac459f-edb0-4b3b-876b-33989a6943e6 STEP: Creating a 
pod to test consume secrets May 17 00:00:32.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-8816dfb0-0a48-4502-ae28-80a4d9bfa509" in namespace "secrets-8762" to be "Succeeded or Failed" May 17 00:00:32.165: INFO: Pod "pod-configmaps-8816dfb0-0a48-4502-ae28-80a4d9bfa509": Phase="Pending", Reason="", readiness=false. Elapsed: 12.175554ms May 17 00:00:34.309: INFO: Pod "pod-configmaps-8816dfb0-0a48-4502-ae28-80a4d9bfa509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155701261s May 17 00:00:36.359: INFO: Pod "pod-configmaps-8816dfb0-0a48-4502-ae28-80a4d9bfa509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205839512s STEP: Saw pod success May 17 00:00:36.359: INFO: Pod "pod-configmaps-8816dfb0-0a48-4502-ae28-80a4d9bfa509" satisfied condition "Succeeded or Failed" May 17 00:00:36.362: INFO: Trying to get logs from node latest-worker pod pod-configmaps-8816dfb0-0a48-4502-ae28-80a4d9bfa509 container env-test: STEP: delete the pod May 17 00:00:36.400: INFO: Waiting for pod pod-configmaps-8816dfb0-0a48-4502-ae28-80a4d9bfa509 to disappear May 17 00:00:36.420: INFO: Pod pod-configmaps-8816dfb0-0a48-4502-ae28-80a4d9bfa509 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:00:36.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8762" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":953,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:00:36.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 17 00:00:36.663: INFO: >>> kubeConfig: /root/.kube/config May 17 00:00:39.627: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:00:50.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8913" for this suite. 
• [SLOW TEST:13.983 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":53,"skipped":965,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:00:50.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 00:00:50.542: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b0b264c-2d01-4669-b7f0-ca234590dcb2" in namespace "projected-2063" to be "Succeeded or Failed" May 17 00:00:50.547: INFO: Pod "downwardapi-volume-8b0b264c-2d01-4669-b7f0-ca234590dcb2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.379078ms May 17 00:00:52.553: INFO: Pod "downwardapi-volume-8b0b264c-2d01-4669-b7f0-ca234590dcb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011857265s May 17 00:00:54.557: INFO: Pod "downwardapi-volume-8b0b264c-2d01-4669-b7f0-ca234590dcb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015432018s STEP: Saw pod success May 17 00:00:54.557: INFO: Pod "downwardapi-volume-8b0b264c-2d01-4669-b7f0-ca234590dcb2" satisfied condition "Succeeded or Failed" May 17 00:00:54.560: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8b0b264c-2d01-4669-b7f0-ca234590dcb2 container client-container: STEP: delete the pod May 17 00:00:54.703: INFO: Waiting for pod downwardapi-volume-8b0b264c-2d01-4669-b7f0-ca234590dcb2 to disappear May 17 00:00:54.718: INFO: Pod downwardapi-volume-8b0b264c-2d01-4669-b7f0-ca234590dcb2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:00:54.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2063" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":54,"skipped":970,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:00:54.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 17 00:01:01.302: INFO: Successfully updated pod "adopt-release-69bfn" STEP: Checking that the Job readopts the Pod May 17 00:01:01.302: INFO: Waiting up to 15m0s for pod "adopt-release-69bfn" in namespace "job-1579" to be "adopted" May 17 00:01:01.320: INFO: Pod "adopt-release-69bfn": Phase="Running", Reason="", readiness=true. Elapsed: 17.774156ms May 17 00:01:03.323: INFO: Pod "adopt-release-69bfn": Phase="Running", Reason="", readiness=true. Elapsed: 2.021476382s May 17 00:01:03.323: INFO: Pod "adopt-release-69bfn" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 17 00:01:03.835: INFO: Successfully updated pod "adopt-release-69bfn" STEP: Checking that the Job releases the Pod May 17 00:01:03.835: INFO: Waiting up to 15m0s for pod "adopt-release-69bfn" in namespace "job-1579" to be "released" May 17 00:01:03.841: INFO: Pod "adopt-release-69bfn": Phase="Running", Reason="", readiness=true. 
Elapsed: 5.874384ms May 17 00:01:05.845: INFO: Pod "adopt-release-69bfn": Phase="Running", Reason="", readiness=true. Elapsed: 2.009787639s May 17 00:01:05.845: INFO: Pod "adopt-release-69bfn" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:01:05.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1579" for this suite. • [SLOW TEST:11.128 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":55,"skipped":972,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:01:05.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 17 00:01:06.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9371' May 17 00:01:06.405: INFO: stderr: "" May 17 00:01:06.405: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 17 00:01:06.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9371' May 17 00:01:14.853: INFO: stderr: "" May 17 00:01:14.853: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:01:14.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9371" for this suite. 
• [SLOW TEST:9.007 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":56,"skipped":975,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:01:14.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 17 00:01:14.946: INFO: Waiting up to 5m0s for pod "pod-569085dd-e64b-4c0c-a7ec-d86b85689570" in namespace "emptydir-7248" to be "Succeeded or Failed" May 17 00:01:14.955: INFO: Pod "pod-569085dd-e64b-4c0c-a7ec-d86b85689570": Phase="Pending", Reason="", readiness=false. Elapsed: 9.076387ms May 17 00:01:16.974: INFO: Pod "pod-569085dd-e64b-4c0c-a7ec-d86b85689570": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027780581s May 17 00:01:19.000: INFO: Pod "pod-569085dd-e64b-4c0c-a7ec-d86b85689570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053518052s STEP: Saw pod success May 17 00:01:19.000: INFO: Pod "pod-569085dd-e64b-4c0c-a7ec-d86b85689570" satisfied condition "Succeeded or Failed" May 17 00:01:19.003: INFO: Trying to get logs from node latest-worker2 pod pod-569085dd-e64b-4c0c-a7ec-d86b85689570 container test-container: STEP: delete the pod May 17 00:01:19.050: INFO: Waiting for pod pod-569085dd-e64b-4c0c-a7ec-d86b85689570 to disappear May 17 00:01:19.063: INFO: Pod pod-569085dd-e64b-4c0c-a7ec-d86b85689570 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:01:19.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7248" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":57,"skipped":992,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:01:19.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 17 00:01:23.307: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:01:23.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5237" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":1009,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:01:23.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:01:24.307: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:01:26.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270484, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270484, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270484, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270484, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:01:29.355: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the 
validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:01:29.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4848" for this suite. STEP: Destroying namespace "webhook-4848-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.143 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":59,"skipped":1052,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:01:29.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 00:01:29.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db9dabec-094c-46d1-a0b3-a71bcead8800" in namespace "projected-6451" to be "Succeeded or Failed" May 17 00:01:29.843: INFO: Pod "downwardapi-volume-db9dabec-094c-46d1-a0b3-a71bcead8800": Phase="Pending", Reason="", readiness=false. Elapsed: 3.740539ms May 17 00:01:31.849: INFO: Pod "downwardapi-volume-db9dabec-094c-46d1-a0b3-a71bcead8800": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00965169s May 17 00:01:33.861: INFO: Pod "downwardapi-volume-db9dabec-094c-46d1-a0b3-a71bcead8800": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022091916s STEP: Saw pod success May 17 00:01:33.861: INFO: Pod "downwardapi-volume-db9dabec-094c-46d1-a0b3-a71bcead8800" satisfied condition "Succeeded or Failed" May 17 00:01:33.864: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-db9dabec-094c-46d1-a0b3-a71bcead8800 container client-container: STEP: delete the pod May 17 00:01:33.922: INFO: Waiting for pod downwardapi-volume-db9dabec-094c-46d1-a0b3-a71bcead8800 to disappear May 17 00:01:33.963: INFO: Pod downwardapi-volume-db9dabec-094c-46d1-a0b3-a71bcead8800 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:01:33.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6451" for this suite. 
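
The pod lifecycle entries above (Phase="Pending" → "Pending" → "Succeeded", each with an elapsed time) come from the framework polling the pod until it reaches a terminal phase or the 5m0s timeout expires. A minimal sketch of that wait pattern, with a hypothetical `get_phase` callable standing in for the real API call to the cluster:

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal pod phase or timeout expires."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated phase sequence matching the log entries above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_completion(lambda: next(phases), interval=0.01))  # prints: Succeeded
```

This is only an illustration of the loop the log reflects; the actual framework code lives in `test/e2e/framework` and also records the elapsed time per poll.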
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":60,"skipped":1055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:01:33.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-7dab02c1-ca1d-47ba-852c-12479ccbdb97 STEP: Creating a pod to test consume secrets May 17 00:01:34.048: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f85aa387-73c6-4e00-9dd2-70179f18554c" in namespace "projected-927" to be "Succeeded or Failed" May 17 00:01:34.108: INFO: Pod "pod-projected-secrets-f85aa387-73c6-4e00-9dd2-70179f18554c": Phase="Pending", Reason="", readiness=false. Elapsed: 59.301736ms May 17 00:01:36.112: INFO: Pod "pod-projected-secrets-f85aa387-73c6-4e00-9dd2-70179f18554c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063543823s May 17 00:01:38.116: INFO: Pod "pod-projected-secrets-f85aa387-73c6-4e00-9dd2-70179f18554c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067374695s STEP: Saw pod success May 17 00:01:38.116: INFO: Pod "pod-projected-secrets-f85aa387-73c6-4e00-9dd2-70179f18554c" satisfied condition "Succeeded or Failed" May 17 00:01:38.119: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-f85aa387-73c6-4e00-9dd2-70179f18554c container projected-secret-volume-test: STEP: delete the pod May 17 00:01:38.324: INFO: Waiting for pod pod-projected-secrets-f85aa387-73c6-4e00-9dd2-70179f18554c to disappear May 17 00:01:38.376: INFO: Pod pod-projected-secrets-f85aa387-73c6-4e00-9dd2-70179f18554c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:01:38.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-927" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":1105,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:01:38.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:01:45.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2725" for this suite. • [SLOW TEST:7.094 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":62,"skipped":1105,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:01:45.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5070 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5070 STEP: creating replication controller externalsvc in namespace services-5070 I0517 00:01:45.772511 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5070, replica count: 2 I0517 00:01:48.822896 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:01:51.823169 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 17 00:01:51.888: INFO: Creating new exec pod May 17 00:01:55.992: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5070 execpod2hmvd -- /bin/sh -x -c nslookup clusterip-service' May 17 00:01:56.277: INFO: stderr: "I0517 00:01:56.133676 2095 log.go:172] (0xc000a06c60) (0xc000614fa0) Create stream\nI0517 00:01:56.133739 2095 log.go:172] (0xc000a06c60) (0xc000614fa0) Stream added, broadcasting: 1\nI0517 00:01:56.136494 2095 log.go:172] (0xc000a06c60) Reply frame received for 1\nI0517 00:01:56.136551 2095 log.go:172] (0xc000a06c60) (0xc0005550e0) Create stream\nI0517 00:01:56.136581 2095 log.go:172] (0xc000a06c60) (0xc0005550e0) Stream added, broadcasting: 3\nI0517 00:01:56.137829 2095 log.go:172] (0xc000a06c60) Reply frame received for 3\nI0517 00:01:56.137867 2095 log.go:172] (0xc000a06c60) (0xc000555400) Create stream\nI0517 00:01:56.137881 2095 log.go:172] (0xc000a06c60) (0xc000555400) Stream added, broadcasting: 5\nI0517 00:01:56.139097 2095 log.go:172] (0xc000a06c60) Reply frame received for 5\nI0517 00:01:56.235588 2095 log.go:172] (0xc000a06c60) Data frame received for 5\nI0517 00:01:56.235632 2095 log.go:172] (0xc000555400) (5) Data frame handling\nI0517 00:01:56.235660 2095 log.go:172] (0xc000555400) (5) Data frame sent\n+ nslookup clusterip-service\nI0517 00:01:56.269311 2095 log.go:172] (0xc000a06c60) Data frame received for 3\nI0517 00:01:56.269353 2095 log.go:172] (0xc0005550e0) (3) Data frame handling\nI0517 00:01:56.269370 2095 log.go:172] (0xc0005550e0) (3) Data frame sent\nI0517 00:01:56.270090 2095 log.go:172] (0xc000a06c60) Data frame received for 3\nI0517 00:01:56.270106 2095 log.go:172] (0xc0005550e0) (3) Data frame handling\nI0517 00:01:56.270121 2095 log.go:172] (0xc0005550e0) (3) Data frame sent\nI0517 00:01:56.270732 2095 log.go:172] (0xc000a06c60) Data frame received for 3\nI0517 00:01:56.270793 2095 log.go:172] (0xc0005550e0) (3) Data frame handling\nI0517 00:01:56.270886 2095 log.go:172] (0xc000a06c60) Data frame received for 5\nI0517 00:01:56.270910 2095 
log.go:172] (0xc000555400) (5) Data frame handling\nI0517 00:01:56.272623 2095 log.go:172] (0xc000a06c60) Data frame received for 1\nI0517 00:01:56.272657 2095 log.go:172] (0xc000614fa0) (1) Data frame handling\nI0517 00:01:56.272682 2095 log.go:172] (0xc000614fa0) (1) Data frame sent\nI0517 00:01:56.272715 2095 log.go:172] (0xc000a06c60) (0xc000614fa0) Stream removed, broadcasting: 1\nI0517 00:01:56.272969 2095 log.go:172] (0xc000a06c60) Go away received\nI0517 00:01:56.273382 2095 log.go:172] (0xc000a06c60) (0xc000614fa0) Stream removed, broadcasting: 1\nI0517 00:01:56.273403 2095 log.go:172] (0xc000a06c60) (0xc0005550e0) Stream removed, broadcasting: 3\nI0517 00:01:56.273416 2095 log.go:172] (0xc000a06c60) (0xc000555400) Stream removed, broadcasting: 5\n" May 17 00:01:56.277: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5070.svc.cluster.local\tcanonical name = externalsvc.services-5070.svc.cluster.local.\nName:\texternalsvc.services-5070.svc.cluster.local\nAddress: 10.98.132.75\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5070, will wait for the garbage collector to delete the pods May 17 00:01:56.356: INFO: Deleting ReplicationController externalsvc took: 24.390153ms May 17 00:01:56.656: INFO: Terminating ReplicationController externalsvc pods took: 300.269966ms May 17 00:02:04.997: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:02:05.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5070" for this suite. 
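
The test above validates the ClusterIP→ExternalName change by running `nslookup clusterip-service` in an exec pod and checking the output for a canonical name pointing at `externalsvc`. A small sketch of that check (a hypothetical helper, not part of the e2e framework), fed the exact stdout captured in the log:

```python
import re

def cname_target(nslookup_stdout):
    """Extract the canonical-name target from nslookup output, if present."""
    m = re.search(r"canonical name = (\S+)", nslookup_stdout)
    return m.group(1).rstrip(".") if m else None

# stdout copied verbatim from the kubectl exec above
stdout = ("Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\n"
          "clusterip-service.services-5070.svc.cluster.local\tcanonical name = "
          "externalsvc.services-5070.svc.cluster.local.\n"
          "Name:\texternalsvc.services-5070.svc.cluster.local\n"
          "Address: 10.98.132.75\n\n")

assert cname_target(stdout) == "externalsvc.services-5070.svc.cluster.local"
```

The trailing dot in the DNS answer is stripped before comparison; the real test performs the equivalent check in Go against the same stdout.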
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:19.526 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":63,"skipped":1106,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:02:05.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 17 00:02:05.127: INFO: Waiting up to 5m0s for pod "pod-73a4de1e-b38e-4916-bbda-209b363324fc" in namespace "emptydir-8162" to be "Succeeded or Failed" May 17 00:02:05.131: INFO: Pod "pod-73a4de1e-b38e-4916-bbda-209b363324fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.725765ms May 17 00:02:07.135: INFO: Pod "pod-73a4de1e-b38e-4916-bbda-209b363324fc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007893498s May 17 00:02:09.139: INFO: Pod "pod-73a4de1e-b38e-4916-bbda-209b363324fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011912046s STEP: Saw pod success May 17 00:02:09.139: INFO: Pod "pod-73a4de1e-b38e-4916-bbda-209b363324fc" satisfied condition "Succeeded or Failed" May 17 00:02:09.142: INFO: Trying to get logs from node latest-worker2 pod pod-73a4de1e-b38e-4916-bbda-209b363324fc container test-container: STEP: delete the pod May 17 00:02:09.163: INFO: Waiting for pod pod-73a4de1e-b38e-4916-bbda-209b363324fc to disappear May 17 00:02:09.227: INFO: Pod pod-73a4de1e-b38e-4916-bbda-209b363324fc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:02:09.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8162" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":64,"skipped":1120,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:02:09.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 17 
00:02:09.394: INFO: Waiting up to 5m0s for pod "pod-09793d44-37f6-4d5a-9551-33be338cd64b" in namespace "emptydir-5808" to be "Succeeded or Failed" May 17 00:02:09.419: INFO: Pod "pod-09793d44-37f6-4d5a-9551-33be338cd64b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.408636ms May 17 00:02:11.512: INFO: Pod "pod-09793d44-37f6-4d5a-9551-33be338cd64b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118642714s May 17 00:02:13.517: INFO: Pod "pod-09793d44-37f6-4d5a-9551-33be338cd64b": Phase="Running", Reason="", readiness=true. Elapsed: 4.12359145s May 17 00:02:15.522: INFO: Pod "pod-09793d44-37f6-4d5a-9551-33be338cd64b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128101512s STEP: Saw pod success May 17 00:02:15.522: INFO: Pod "pod-09793d44-37f6-4d5a-9551-33be338cd64b" satisfied condition "Succeeded or Failed" May 17 00:02:15.525: INFO: Trying to get logs from node latest-worker2 pod pod-09793d44-37f6-4d5a-9551-33be338cd64b container test-container: STEP: delete the pod May 17 00:02:15.574: INFO: Waiting for pod pod-09793d44-37f6-4d5a-9551-33be338cd64b to disappear May 17 00:02:15.587: INFO: Pod pod-09793d44-37f6-4d5a-9551-33be338cd64b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:02:15.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5808" for this suite. 
• [SLOW TEST:6.361 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:02:15.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 17 00:02:15.714: INFO: Waiting up to 5m0s for pod "var-expansion-395bd51e-ee23-4d25-aaba-53e8bfeb07fd" in namespace "var-expansion-6066" to be "Succeeded or Failed" May 17 00:02:15.747: INFO: Pod "var-expansion-395bd51e-ee23-4d25-aaba-53e8bfeb07fd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.741964ms May 17 00:02:17.760: INFO: Pod "var-expansion-395bd51e-ee23-4d25-aaba-53e8bfeb07fd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.046185135s May 17 00:02:19.764: INFO: Pod "var-expansion-395bd51e-ee23-4d25-aaba-53e8bfeb07fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050064872s STEP: Saw pod success May 17 00:02:19.764: INFO: Pod "var-expansion-395bd51e-ee23-4d25-aaba-53e8bfeb07fd" satisfied condition "Succeeded or Failed" May 17 00:02:19.767: INFO: Trying to get logs from node latest-worker2 pod var-expansion-395bd51e-ee23-4d25-aaba-53e8bfeb07fd container dapi-container: STEP: delete the pod May 17 00:02:19.965: INFO: Waiting for pod var-expansion-395bd51e-ee23-4d25-aaba-53e8bfeb07fd to disappear May 17 00:02:20.030: INFO: Pod var-expansion-395bd51e-ee23-4d25-aaba-53e8bfeb07fd no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:02:20.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6066" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":66,"skipped":1170,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:02:20.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:02:20.821: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:02:22.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270540, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270540, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270540, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270540, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:02:25.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:02:25.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-633-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:02:27.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4818" for this suite. STEP: Destroying namespace "webhook-4818-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.089 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":67,"skipped":1186,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:02:27.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 
17 00:02:27.272: INFO: Waiting up to 5m0s for pod "client-containers-d8a9125a-8fa4-4a8f-a012-91debe531ba4" in namespace "containers-6912" to be "Succeeded or Failed" May 17 00:02:27.561: INFO: Pod "client-containers-d8a9125a-8fa4-4a8f-a012-91debe531ba4": Phase="Pending", Reason="", readiness=false. Elapsed: 289.109082ms May 17 00:02:29.564: INFO: Pod "client-containers-d8a9125a-8fa4-4a8f-a012-91debe531ba4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292041967s May 17 00:02:31.568: INFO: Pod "client-containers-d8a9125a-8fa4-4a8f-a012-91debe531ba4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.296254104s STEP: Saw pod success May 17 00:02:31.568: INFO: Pod "client-containers-d8a9125a-8fa4-4a8f-a012-91debe531ba4" satisfied condition "Succeeded or Failed" May 17 00:02:31.572: INFO: Trying to get logs from node latest-worker pod client-containers-d8a9125a-8fa4-4a8f-a012-91debe531ba4 container test-container: STEP: delete the pod May 17 00:02:31.611: INFO: Waiting for pod client-containers-d8a9125a-8fa4-4a8f-a012-91debe531ba4 to disappear May 17 00:02:31.623: INFO: Pod client-containers-d8a9125a-8fa4-4a8f-a012-91debe531ba4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:02:31.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6912" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":68,"skipped":1195,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:02:31.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:02:31.715: INFO: Creating deployment "webserver-deployment" May 17 00:02:31.726: INFO: Waiting for observed generation 1 May 17 00:02:33.742: INFO: Waiting for all required pods to come up May 17 00:02:33.747: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 17 00:02:43.756: INFO: Waiting for deployment "webserver-deployment" to complete May 17 00:02:43.803: INFO: Updating deployment "webserver-deployment" with a non-existent image May 17 00:02:43.810: INFO: Updating deployment webserver-deployment May 17 00:02:43.810: INFO: Waiting for observed generation 2 May 17 00:02:45.959: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 17 00:02:45.961: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 17 00:02:45.963: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to 
have desired number of replicas May 17 00:02:45.969: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 17 00:02:45.969: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 17 00:02:45.970: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 17 00:02:45.974: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 17 00:02:45.974: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 17 00:02:45.979: INFO: Updating deployment webserver-deployment May 17 00:02:45.979: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 17 00:02:46.444: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 17 00:02:46.499: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 17 00:02:46.767: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7074 /apis/apps/v1/namespaces/deployment-7074/deployments/webserver-deployment aa771e2c-d25a-4a1d-93fd-86faeccf15d8 5278942 3 2020-05-17 00:02:31 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-17 00:02:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001cc3488 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-17 00:02:44 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-17 00:02:46 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 17 00:02:46.906: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-7074 /apis/apps/v1/namespaces/deployment-7074/replicasets/webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 5278996 3 2020-05-17 00:02:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment aa771e2c-d25a-4a1d-93fd-86faeccf15d8 0xc004735c57 0xc004735c58}] [] [{kube-controller-manager Update apps/v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aa771e2c-d25a-4a1d-93fd-86faeccf15d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004735ce8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 17 00:02:46.906: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 17 00:02:46.906: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-7074 /apis/apps/v1/namespaces/deployment-7074/replicasets/webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 5278981 3 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment aa771e2c-d25a-4a1d-93fd-86faeccf15d8 0xc004735d67 0xc004735d68}] [] [{kube-controller-manager Update apps/v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aa771e2c-d25a-4a1d-93fd-86faeccf15d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,
Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004735de8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 17 00:02:47.000: INFO: Pod "webserver-deployment-6676bcd6d4-264vt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-264vt webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-264vt eeaffc6c-f48e-4ed2-b15b-187bede23f11 5278973 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc001cc3b37 0xc001cc3b38}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.001: INFO: Pod "webserver-deployment-6676bcd6d4-2wsrk" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2wsrk webserver-deployment-6676bcd6d4- deployment-7074 
/api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-2wsrk 348262d5-f045-4491-95a6-3c0eaf000579 5278971 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc001cc3f17 0xc001cc3f18}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.001: INFO: Pod "webserver-deployment-6676bcd6d4-6c26b" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6c26b webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-6c26b e55c0170-7678-4fb2-a8c9-fc345b2abe73 5278883 0 2020-05-17 00:02:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc00084e2c7 0xc00084e2c8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:43 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-17 00:02:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.001: INFO: Pod "webserver-deployment-6676bcd6d4-fxt9q" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fxt9q webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-fxt9q 358796a4-0bb3-40ab-af28-586a7868deb0 5278936 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc00084ece7 0xc00084ece8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.002: INFO: Pod "webserver-deployment-6676bcd6d4-gds9c" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gds9c webserver-deployment-6676bcd6d4- deployment-7074 
/api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-gds9c a9b76a52-b445-4e06-8d86-bed02adec9b1 5278972 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc00084f127 0xc00084f128}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 17 00:02:47.002: INFO: Pod "webserver-deployment-6676bcd6d4-gldw9" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gldw9 webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-gldw9 4a371fe1-e14d-43ed-b6ca-fbeca562ad24 5278911 0 2020-05-17 00:02:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc00084f707 0xc00084f708}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-17 00:02:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 17 00:02:47.002: INFO: Pod "webserver-deployment-6676bcd6d4-jgdjj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jgdjj webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-jgdjj abb25f6b-cd7a-48b3-8aef-a359553601eb 5278969 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc00084fe97 0xc00084fe98}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 17 00:02:47.002: INFO: Pod "webserver-deployment-6676bcd6d4-n9mh5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-n9mh5 webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-n9mh5 a0fbf13f-8ca8-42cd-b1b7-e0b2da0abb73 5278960 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc0005b6727 0xc0005b6728}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 17 00:02:47.002: INFO: Pod "webserver-deployment-6676bcd6d4-nnm4n" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nnm4n webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-nnm4n d0c80357-618f-43be-a762-12811c8ef63e 5278958 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc0005b7507 0xc0005b7508}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 17 00:02:47.003: INFO: Pod "webserver-deployment-6676bcd6d4-w4v2b" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-w4v2b webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-w4v2b ed5db824-8820-40f2-acc1-4247eb68beb3 5278897 0 2020-05-17 00:02:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc0005b7ff7 0xc0005b7ff8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-17 00:02:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 17 00:02:47.003: INFO: Pod "webserver-deployment-6676bcd6d4-wsrt2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wsrt2 webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-wsrt2 9ed3981f-e62e-4a0d-b644-0ef2ff3cc60c 5278909 0 2020-05-17 00:02:44 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc00039bff7 0xc00039bff8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-17 00:02:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 17 00:02:47.003: INFO: Pod "webserver-deployment-6676bcd6d4-wzpvp" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wzpvp webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-wzpvp 7a6bc732-f24e-499d-b580-9f4bc592568f 5278887 0 2020-05-17 00:02:43 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc0007e6af7 0xc0007e6af8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:44 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:43 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-17 00:02:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.003: INFO: Pod "webserver-deployment-6676bcd6d4-zmxsb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zmxsb webserver-deployment-6676bcd6d4- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-6676bcd6d4-zmxsb 1e0068d6-bfbc-4111-a2b9-ffe49a2edcaf 5278992 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d798fada-beab-4737-9a62-6416f5112a1a 0xc0007e7177 0xc0007e7178}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d798fada-beab-4737-9a62-6416f5112a1a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.004: INFO: Pod "webserver-deployment-84855cf797-44qnh" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-44qnh webserver-deployment-84855cf797- deployment-7074 
/api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-44qnh e3913f93-49b9-447f-ad03-6858f411a28d 5278816 0 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc0006c6127 0xc0006c6128}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.137\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.137,StartTime:2020-05-17 00:02:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:02:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://97ece50a077f339629dd582e1e9a02eecd2d8858683b66bac4222b15fcead5d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.004: INFO: Pod "webserver-deployment-84855cf797-4hv64" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4hv64 webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-4hv64 0cf67f14-af08-4d88-8505-e7714027bdcb 5278845 0 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc0006c6607 0xc0006c6608}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:42 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.140\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.140,StartTime:2020-05-17 
00:02:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:02:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eaee13791f08ded2ea5fef1acac51aa407c8911de0f0b131a09bc025863675f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.004: INFO: Pod "webserver-deployment-84855cf797-5cwbc" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5cwbc webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-5cwbc 9d651d86-aa9f-4411-b54a-db66466252e8 5278801 0 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc0006c6dd7 0xc0006c6dd8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 
00:02:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vo
lumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.83,StartTime:2020-05-17 00:02:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:02:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6890028714b6d36f489de4f71c3f3b97fe717725aef6014fa177781685376a67,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.004: INFO: Pod "webserver-deployment-84855cf797-5jlws" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5jlws webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-5jlws 7df816ff-8185-4ba8-b623-247bc485eb02 5278954 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc0006c73e7 0xc0006c73e8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.004: INFO: Pod "webserver-deployment-84855cf797-5sj29" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5sj29 webserver-deployment-84855cf797- deployment-7074 
/api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-5sj29 048c84b8-45a2-470f-bb47-173d3ef7449e 5278975 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc0006c7617 0xc0006c7618}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.004: INFO: Pod "webserver-deployment-84855cf797-5w5bs" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5w5bs webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-5w5bs 62a0e0aa-e3c5-4cb7-8f4d-9e3b0d367407 5278976 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc0006c7967 0xc0006c7968}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersist
entDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadine
ssGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.005: INFO: Pod "webserver-deployment-84855cf797-6gs6h" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6gs6h webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-6gs6h d97739a8-f86b-4d7d-a770-dc2d8fa42d6f 5278961 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc0006c7f77 0xc0006c7f78}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.005: INFO: Pod "webserver-deployment-84855cf797-bmg8p" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bmg8p webserver-deployment-84855cf797- deployment-7074 
/api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-bmg8p c04d3b00-27fd-4f27-904c-be6379dae10b 5278832 0 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00033f3c7 0xc00033f3c8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.85\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.85,StartTime:2020-05-17 00:02:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:02:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5f6463097f7e3b8c69a734974074b1a78f33945174bdd37ff360b08511a065c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.005: INFO: Pod "webserver-deployment-84855cf797-cmk52" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cmk52 webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-cmk52 38949437-901b-4153-a72c-0656474439cc 5278963 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00033f967 0xc00033f968}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.005: INFO: Pod "webserver-deployment-84855cf797-cs4ct" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cs4ct webserver-deployment-84855cf797- deployment-7074 
/api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-cs4ct 48e34e08-3320-4b7f-9df8-11783aada378 5278974 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00033fc47 0xc00033fc48}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.006: INFO: Pod "webserver-deployment-84855cf797-gdnzj" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gdnzj webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-gdnzj 35bf4811-1f1b-43a7-adc5-71f9faabefa9 5278841 0 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00033ff57 0xc00033ff58}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.138\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.138,StartTime:2020-05-17 00:02:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:02:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a1c663d2c546d0ffc09822d224686cd2da98f7ac6184de5ab55f9e9a52ac3eb7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.006: INFO: Pod "webserver-deployment-84855cf797-gg5w5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gg5w5 webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-gg5w5 104843cd-bdb5-4dc5-81f8-5e75aacb73d6 5278959 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094c397 0xc00094c398}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.006: INFO: Pod "webserver-deployment-84855cf797-jnx4d" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jnx4d webserver-deployment-84855cf797- deployment-7074 
/api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-jnx4d 97a9bd69-f553-48fb-9433-b6bd5a0a5715 5278956 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094c617 0xc00094c618}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.006: INFO: Pod "webserver-deployment-84855cf797-kvj4p" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kvj4p webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-kvj4p e7045d55-2df4-459a-9791-23d4fc8a43cb 5278982 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094c7e7 0xc00094c7e8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-17 00:02:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.007: INFO: Pod "webserver-deployment-84855cf797-mkk75" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mkk75 webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-mkk75 b1252a3a-0df0-4b00-aef2-4088d2f64281 5278833 0 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094cb17 0xc00094cb18}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.139,StartTime:2020-05-17 00:02:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:02:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ed0ca59b70fa442deaebc904338fed89c6267f2d76db5da8643185a697f474da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.007: INFO: Pod "webserver-deployment-84855cf797-rc7rs" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rc7rs webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-rc7rs 1678c2e8-e902-4524-a0af-5b1ffcb83dd7 5278787 0 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094cde7 0xc00094cde8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:38 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.136\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]
VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.136,StartTime:2020-05-17 
00:02:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:02:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6365ede7c1e72ed68a874a6bee301fe5a21993c114321817e4646757978be90c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.007: INFO: Pod "webserver-deployment-84855cf797-tchfv" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tchfv webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-tchfv e3eeddba-422b-430a-b33a-20893048ab46 5278812 0 2020-05-17 00:02:31 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094d097 0xc00094d098}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 
00:02:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vo
lumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.84,StartTime:2020-05-17 00:02:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:02:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a71dd09a16da1b49fc0d2ebdbab1c466c71aa2ccb40d7162939a7d5a6d2d6134,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.007: INFO: Pod "webserver-deployment-84855cf797-tq9t2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tq9t2 webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-tq9t2 87414283-b3c9-49ad-af7a-4e7a9e3dccd2 5278993 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094d3d7 0xc00094d3d8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-17 00:02:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.008: INFO: Pod "webserver-deployment-84855cf797-wcln7" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wcln7 webserver-deployment-84855cf797- deployment-7074 /api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-wcln7 65a4585d-e914-4417-91e4-49f9a6f68f55 5278934 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094d667 0xc00094d668}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:02:47.008: INFO: Pod "webserver-deployment-84855cf797-whh28" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-whh28 webserver-deployment-84855cf797- deployment-7074 
/api/v1/namespaces/deployment-7074/pods/webserver-deployment-84855cf797-whh28 ef1cde5c-356f-4bb4-9bf5-c6cfcafeaf9a 5278962 0 2020-05-17 00:02:46 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1210098f-1915-4ff0-a956-6399e8e7e7d3 0xc00094d7d7 0xc00094d7d8}] [] [{kube-controller-manager Update v1 2020-05-17 00:02:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1210098f-1915-4ff0-a956-6399e8e7e7d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k96jt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k96jt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k96jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:02:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:02:47.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7074" for this suite. • [SLOW TEST:15.661 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":69,"skipped":1199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:02:47.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod 
var-expansion-1145d258-edf5-40bd-95ee-b85df6c916d1 STEP: updating the pod May 17 00:03:10.184: INFO: Successfully updated pod "var-expansion-1145d258-edf5-40bd-95ee-b85df6c916d1" STEP: waiting for pod and container restart STEP: Failing liveness probe May 17 00:03:10.210: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-4898 PodName:var-expansion-1145d258-edf5-40bd-95ee-b85df6c916d1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:03:10.210: INFO: >>> kubeConfig: /root/.kube/config I0517 00:03:10.244454 7 log.go:172] (0xc0024f20b0) (0xc001f77180) Create stream I0517 00:03:10.244488 7 log.go:172] (0xc0024f20b0) (0xc001f77180) Stream added, broadcasting: 1 I0517 00:03:10.246553 7 log.go:172] (0xc0024f20b0) Reply frame received for 1 I0517 00:03:10.246598 7 log.go:172] (0xc0024f20b0) (0xc00260a140) Create stream I0517 00:03:10.246620 7 log.go:172] (0xc0024f20b0) (0xc00260a140) Stream added, broadcasting: 3 I0517 00:03:10.247329 7 log.go:172] (0xc0024f20b0) Reply frame received for 3 I0517 00:03:10.247365 7 log.go:172] (0xc0024f20b0) (0xc001f772c0) Create stream I0517 00:03:10.247381 7 log.go:172] (0xc0024f20b0) (0xc001f772c0) Stream added, broadcasting: 5 I0517 00:03:10.248050 7 log.go:172] (0xc0024f20b0) Reply frame received for 5 I0517 00:03:10.611931 7 log.go:172] (0xc0024f20b0) Data frame received for 5 I0517 00:03:10.611984 7 log.go:172] (0xc001f772c0) (5) Data frame handling I0517 00:03:10.612031 7 log.go:172] (0xc0024f20b0) Data frame received for 3 I0517 00:03:10.612059 7 log.go:172] (0xc00260a140) (3) Data frame handling I0517 00:03:10.617630 7 log.go:172] (0xc0024f20b0) Data frame received for 1 I0517 00:03:10.617660 7 log.go:172] (0xc001f77180) (1) Data frame handling I0517 00:03:10.617693 7 log.go:172] (0xc001f77180) (1) Data frame sent I0517 00:03:10.617722 7 log.go:172] (0xc0024f20b0) (0xc001f77180) Stream removed, broadcasting: 1 I0517 
00:03:10.617756 7 log.go:172] (0xc0024f20b0) Go away received I0517 00:03:10.617883 7 log.go:172] (0xc0024f20b0) (0xc001f77180) Stream removed, broadcasting: 1 I0517 00:03:10.617911 7 log.go:172] (0xc0024f20b0) (0xc00260a140) Stream removed, broadcasting: 3 I0517 00:03:10.617936 7 log.go:172] (0xc0024f20b0) (0xc001f772c0) Stream removed, broadcasting: 5 May 17 00:03:10.617: INFO: Pod exec output: / STEP: Waiting for container to restart May 17 00:03:10.678: INFO: Container dapi-container, restarts: 0 May 17 00:03:20.683: INFO: Container dapi-container, restarts: 0 May 17 00:03:30.683: INFO: Container dapi-container, restarts: 0 May 17 00:03:40.695: INFO: Container dapi-container, restarts: 0 May 17 00:03:50.682: INFO: Container dapi-container, restarts: 1 May 17 00:03:50.682: INFO: Container has restart count: 1 STEP: Rewriting the file May 17 00:03:50.682: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-4898 PodName:var-expansion-1145d258-edf5-40bd-95ee-b85df6c916d1 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:03:50.682: INFO: >>> kubeConfig: /root/.kube/config I0517 00:03:50.716131 7 log.go:172] (0xc0037662c0) (0xc001610640) Create stream I0517 00:03:50.716157 7 log.go:172] (0xc0037662c0) (0xc001610640) Stream added, broadcasting: 1 I0517 00:03:50.718669 7 log.go:172] (0xc0037662c0) Reply frame received for 1 I0517 00:03:50.718717 7 log.go:172] (0xc0037662c0) (0xc002126b40) Create stream I0517 00:03:50.718733 7 log.go:172] (0xc0037662c0) (0xc002126b40) Stream added, broadcasting: 3 I0517 00:03:50.719739 7 log.go:172] (0xc0037662c0) Reply frame received for 3 I0517 00:03:50.719793 7 log.go:172] (0xc0037662c0) (0xc002126be0) Create stream I0517 00:03:50.719820 7 log.go:172] (0xc0037662c0) (0xc002126be0) Stream added, broadcasting: 5 I0517 00:03:50.720726 7 log.go:172] (0xc0037662c0) Reply frame received for 5 I0517 00:03:50.797980 7 
log.go:172] (0xc0037662c0) Data frame received for 5 I0517 00:03:50.798027 7 log.go:172] (0xc002126be0) (5) Data frame handling I0517 00:03:50.798062 7 log.go:172] (0xc0037662c0) Data frame received for 3 I0517 00:03:50.798080 7 log.go:172] (0xc002126b40) (3) Data frame handling I0517 00:03:50.799148 7 log.go:172] (0xc0037662c0) Data frame received for 1 I0517 00:03:50.799166 7 log.go:172] (0xc001610640) (1) Data frame handling I0517 00:03:50.799172 7 log.go:172] (0xc001610640) (1) Data frame sent I0517 00:03:50.799180 7 log.go:172] (0xc0037662c0) (0xc001610640) Stream removed, broadcasting: 1 I0517 00:03:50.799212 7 log.go:172] (0xc0037662c0) Go away received I0517 00:03:50.799261 7 log.go:172] (0xc0037662c0) (0xc001610640) Stream removed, broadcasting: 1 I0517 00:03:50.799274 7 log.go:172] (0xc0037662c0) (0xc002126b40) Stream removed, broadcasting: 3 I0517 00:03:50.799284 7 log.go:172] (0xc0037662c0) (0xc002126be0) Stream removed, broadcasting: 5 May 17 00:03:50.799: INFO: Exec stderr: "" May 17 00:03:50.799: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 17 00:04:18.808: INFO: Container has restart count: 2 May 17 00:05:20.807: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 17 00:05:20.810: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-4898 PodName:var-expansion-1145d258-edf5-40bd-95ee-b85df6c916d1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:05:20.810: INFO: >>> kubeConfig: /root/.kube/config I0517 00:05:20.846938 7 log.go:172] (0xc001ece0b0) (0xc001cc1ae0) Create stream I0517 00:05:20.846980 7 log.go:172] (0xc001ece0b0) (0xc001cc1ae0) Stream added, broadcasting: 1 I0517 00:05:20.849081 7 log.go:172] (0xc001ece0b0) Reply frame received for 1 I0517 00:05:20.849302 7 log.go:172] (0xc001ece0b0) (0xc001cc1cc0) Create stream I0517 00:05:20.849331 7 log.go:172] (0xc001ece0b0) 
(0xc001cc1cc0) Stream added, broadcasting: 3 I0517 00:05:20.850170 7 log.go:172] (0xc001ece0b0) Reply frame received for 3 I0517 00:05:20.850206 7 log.go:172] (0xc001ece0b0) (0xc002048000) Create stream I0517 00:05:20.850219 7 log.go:172] (0xc001ece0b0) (0xc002048000) Stream added, broadcasting: 5 I0517 00:05:20.850989 7 log.go:172] (0xc001ece0b0) Reply frame received for 5 I0517 00:05:20.925916 7 log.go:172] (0xc001ece0b0) Data frame received for 5 I0517 00:05:20.925969 7 log.go:172] (0xc002048000) (5) Data frame handling I0517 00:05:20.926005 7 log.go:172] (0xc001ece0b0) Data frame received for 3 I0517 00:05:20.926037 7 log.go:172] (0xc001cc1cc0) (3) Data frame handling I0517 00:05:20.927088 7 log.go:172] (0xc001ece0b0) Data frame received for 1 I0517 00:05:20.927113 7 log.go:172] (0xc001cc1ae0) (1) Data frame handling I0517 00:05:20.927133 7 log.go:172] (0xc001cc1ae0) (1) Data frame sent I0517 00:05:20.927159 7 log.go:172] (0xc001ece0b0) (0xc001cc1ae0) Stream removed, broadcasting: 1 I0517 00:05:20.927311 7 log.go:172] (0xc001ece0b0) (0xc001cc1ae0) Stream removed, broadcasting: 1 I0517 00:05:20.927402 7 log.go:172] (0xc001ece0b0) (0xc001cc1cc0) Stream removed, broadcasting: 3 I0517 00:05:20.927423 7 log.go:172] (0xc001ece0b0) (0xc002048000) Stream removed, broadcasting: 5 I0517 00:05:20.927481 7 log.go:172] (0xc001ece0b0) Go away received May 17 00:05:20.931: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-4898 PodName:var-expansion-1145d258-edf5-40bd-95ee-b85df6c916d1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:05:20.931: INFO: >>> kubeConfig: /root/.kube/config I0517 00:05:21.007644 7 log.go:172] (0xc0024f22c0) (0xc0016110e0) Create stream I0517 00:05:21.007685 7 log.go:172] (0xc0024f22c0) (0xc0016110e0) Stream added, broadcasting: 1 I0517 00:05:21.009639 7 log.go:172] (0xc0024f22c0) Reply frame received for 1 I0517 00:05:21.009693 7 log.go:172] (0xc0024f22c0) (0xc002126dc0) Create stream I0517 00:05:21.009709 7 log.go:172] (0xc0024f22c0) (0xc002126dc0) Stream added, broadcasting: 3 I0517 00:05:21.010584 7 log.go:172] (0xc0024f22c0) Reply frame received for 3 I0517 00:05:21.010625 7 log.go:172] (0xc0024f22c0) (0xc002126e60) Create stream I0517 00:05:21.010640 7 log.go:172] (0xc0024f22c0) (0xc002126e60) Stream added, broadcasting: 5 I0517 00:05:21.011461 7 log.go:172] (0xc0024f22c0) Reply frame received for 5 I0517 00:05:21.069472 7 log.go:172] (0xc0024f22c0) Data frame received for 5 I0517 00:05:21.069506 7 log.go:172] (0xc002126e60) (5) Data frame handling I0517 00:05:21.069530 7 log.go:172] (0xc0024f22c0) Data frame received for 3 I0517 00:05:21.069544 7 log.go:172] (0xc002126dc0) (3) Data frame handling I0517 00:05:21.070559 7 log.go:172] (0xc0024f22c0) Data frame received for 1 I0517 00:05:21.070575 7 log.go:172] (0xc0016110e0) (1) Data frame handling I0517 00:05:21.070598 7 log.go:172] (0xc0016110e0) (1) Data frame sent I0517 00:05:21.070614 7 log.go:172] (0xc0024f22c0) (0xc0016110e0) Stream removed, broadcasting: 1 I0517 00:05:21.070645 7 log.go:172] (0xc0024f22c0) Go away received I0517 00:05:21.070848 7 log.go:172] (0xc0024f22c0) (0xc0016110e0) Stream removed, broadcasting: 1 I0517 00:05:21.070878 7 log.go:172] (0xc0024f22c0) (0xc002126dc0) Stream removed, broadcasting: 3 I0517 00:05:21.070912 7 log.go:172] (0xc0024f22c0) 
(0xc002126e60) Stream removed, broadcasting: 5 May 17 00:05:21.070: INFO: Deleting pod "var-expansion-1145d258-edf5-40bd-95ee-b85df6c916d1" in namespace "var-expansion-4898" May 17 00:05:21.076: INFO: Wait up to 5m0s for pod "var-expansion-1145d258-edf5-40bd-95ee-b85df6c916d1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:05:55.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4898" for this suite. • [SLOW TEST:187.881 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":70,"skipped":1224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:05:55.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-135 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-135;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-135 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-135;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-135.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-135.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-135.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-135.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-135.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-135.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-135.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-135.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-135.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-135.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-135.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-135.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 181.127.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.127.181_udp@PTR;check="$$(dig +tcp +noall +answer +search 181.127.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.127.181_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-135 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-135;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-135 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-135;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-135.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-135.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-135.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-135.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-135.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-135.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-135.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-135.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-135.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-135.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-135.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-135.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-135.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 181.127.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.127.181_udp@PTR;check="$$(dig +tcp +noall +answer +search 181.127.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.127.181_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 17 00:06:03.412: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.415: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.419: INFO: Unable to read wheezy_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.422: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the 
server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.425: INFO: Unable to read wheezy_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.427: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.430: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.432: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.452: INFO: Unable to read jessie_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.456: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.459: INFO: Unable to read jessie_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.487: INFO: Unable to read jessie_tcp@dns-test-service.dns-135 from pod 
dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.503: INFO: Unable to read jessie_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.507: INFO: Unable to read jessie_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.510: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.514: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:03.531: INFO: Lookups using dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-135 wheezy_tcp@dns-test-service.dns-135 wheezy_udp@dns-test-service.dns-135.svc wheezy_tcp@dns-test-service.dns-135.svc wheezy_udp@_http._tcp.dns-test-service.dns-135.svc wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-135 jessie_tcp@dns-test-service.dns-135 jessie_udp@dns-test-service.dns-135.svc jessie_tcp@dns-test-service.dns-135.svc jessie_udp@_http._tcp.dns-test-service.dns-135.svc jessie_tcp@_http._tcp.dns-test-service.dns-135.svc] May 17 00:06:08.537: INFO: Unable to read wheezy_udp@dns-test-service from pod 
dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.541: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.545: INFO: Unable to read wheezy_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.548: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.551: INFO: Unable to read wheezy_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.555: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.558: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.563: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.587: INFO: Unable to read jessie_udp@dns-test-service 
from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.592: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.595: INFO: Unable to read jessie_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.599: INFO: Unable to read jessie_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.601: INFO: Unable to read jessie_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.607: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:08.627: INFO: Lookups using 
dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-135 wheezy_tcp@dns-test-service.dns-135 wheezy_udp@dns-test-service.dns-135.svc wheezy_tcp@dns-test-service.dns-135.svc wheezy_udp@_http._tcp.dns-test-service.dns-135.svc wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-135 jessie_tcp@dns-test-service.dns-135 jessie_udp@dns-test-service.dns-135.svc jessie_tcp@dns-test-service.dns-135.svc jessie_udp@_http._tcp.dns-test-service.dns-135.svc jessie_tcp@_http._tcp.dns-test-service.dns-135.svc] May 17 00:06:13.553: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.555: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.558: INFO: Unable to read wheezy_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.561: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.564: INFO: Unable to read wheezy_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135.svc 
from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.569: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.572: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.595: INFO: Unable to read jessie_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.598: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.600: INFO: Unable to read jessie_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.606: INFO: Unable to read jessie_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.609: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.615: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:13.637: INFO: Lookups using dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-135 wheezy_tcp@dns-test-service.dns-135 wheezy_udp@dns-test-service.dns-135.svc wheezy_tcp@dns-test-service.dns-135.svc wheezy_udp@_http._tcp.dns-test-service.dns-135.svc wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-135 jessie_tcp@dns-test-service.dns-135 jessie_udp@dns-test-service.dns-135.svc jessie_tcp@dns-test-service.dns-135.svc jessie_udp@_http._tcp.dns-test-service.dns-135.svc jessie_tcp@_http._tcp.dns-test-service.dns-135.svc] May 17 00:06:18.536: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.540: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.544: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.547: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.550: INFO: Unable to read wheezy_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.553: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.556: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.559: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.579: INFO: Unable to read jessie_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.582: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.585: INFO: Unable 
to read jessie_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.589: INFO: Unable to read jessie_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.592: INFO: Unable to read jessie_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.598: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.601: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:18.667: INFO: Lookups using dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-135 wheezy_tcp@dns-test-service.dns-135 wheezy_udp@dns-test-service.dns-135.svc wheezy_tcp@dns-test-service.dns-135.svc wheezy_udp@_http._tcp.dns-test-service.dns-135.svc wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.dns-135 jessie_tcp@dns-test-service.dns-135 jessie_udp@dns-test-service.dns-135.svc jessie_tcp@dns-test-service.dns-135.svc jessie_udp@_http._tcp.dns-test-service.dns-135.svc jessie_tcp@_http._tcp.dns-test-service.dns-135.svc] May 17 00:06:23.536: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.539: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.545: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.548: INFO: Unable to read wheezy_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.552: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.555: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) 
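The failing names above come from the probe script's name construction. In the logged command the doubled `$$` is pod-template escaping for a single `$`; the pod A record and reverse PTR names are built with `awk` roughly as follows (the pod IP below is a hypothetical stand-in for `hostname -i`; 10.100.127.181 is the service IP taken from the log):

```shell
#!/bin/sh
# Sketch of the name construction used by the probe scripts above.
# POD_IP is hypothetical; SVC_IP is the service IP from the logged PTR checks.

POD_IP="10.244.1.5"        # stand-in for `hostname -i` inside the probe pod
SVC_IP="10.100.127.181"

# Pod A record: dots become dashes, suffixed with <namespace>.pod.<cluster-domain>
podARec=$(echo "$POD_IP" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-135.pod.cluster.local"}')
echo "$podARec"    # 10-244-1-5.dns-135.pod.cluster.local

# Reverse PTR name: octets reversed, suffixed with in-addr.arpa.
ptrName=$(echo "$SVC_IP" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptrName"    # 181.127.100.10.in-addr.arpa.
```

Each derived name is then queried with `dig +noall +answer +search` over UDP (`+notcp`) and TCP (`+tcp`), and a non-empty answer writes an `OK` marker file that the test framework polls for.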
May 17 00:06:23.557: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.576: INFO: Unable to read jessie_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.579: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.582: INFO: Unable to read jessie_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.585: INFO: Unable to read jessie_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.612: INFO: Unable to read jessie_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.616: INFO: Unable to read jessie_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.620: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods 
dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.623: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:23.643: INFO: Lookups using dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-135 wheezy_tcp@dns-test-service.dns-135 wheezy_udp@dns-test-service.dns-135.svc wheezy_tcp@dns-test-service.dns-135.svc wheezy_udp@_http._tcp.dns-test-service.dns-135.svc wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-135 jessie_tcp@dns-test-service.dns-135 jessie_udp@dns-test-service.dns-135.svc jessie_tcp@dns-test-service.dns-135.svc jessie_udp@_http._tcp.dns-test-service.dns-135.svc jessie_tcp@_http._tcp.dns-test-service.dns-135.svc] May 17 00:06:28.537: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.540: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods 
dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.550: INFO: Unable to read wheezy_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.553: INFO: Unable to read wheezy_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.555: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.558: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.578: INFO: Unable to read jessie_udp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.580: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.583: INFO: Unable to read jessie_udp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.586: INFO: Unable to read jessie_tcp@dns-test-service.dns-135 from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource 
(get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.589: INFO: Unable to read jessie_udp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.594: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.596: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-135.svc from pod dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995: the server could not find the requested resource (get pods dns-test-34a43ad3-5486-446b-8806-1b327552b995) May 17 00:06:28.613: INFO: Lookups using dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-135 wheezy_tcp@dns-test-service.dns-135 wheezy_udp@dns-test-service.dns-135.svc wheezy_tcp@dns-test-service.dns-135.svc wheezy_udp@_http._tcp.dns-test-service.dns-135.svc wheezy_tcp@_http._tcp.dns-test-service.dns-135.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-135 jessie_tcp@dns-test-service.dns-135 jessie_udp@dns-test-service.dns-135.svc jessie_tcp@dns-test-service.dns-135.svc jessie_udp@_http._tcp.dns-test-service.dns-135.svc jessie_tcp@_http._tcp.dns-test-service.dns-135.svc] May 17 00:06:33.649: INFO: DNS probes using dns-135/dns-test-34a43ad3-5486-446b-8806-1b327552b995 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the 
test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:06:34.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-135" for this suite. • [SLOW TEST:39.239 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":71,"skipped":1255,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:06:34.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:06:34.599: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 17 00:06:36.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5207 create -f -' 
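The manifests piped to `kubectl create -f -` are not echoed in the log. Based on the logged creation output and the `kubectl explain` output further below (kind `E2e-test-crd-publish-openapi-978-crd`, group/version `crd-publish-openapi-test-foo.example.com/v1`, with `spec` and `status` fields), the instance being created plausibly has this shape — the `spec` contents here are assumed, not taken from the log:

```yaml
# Hypothetical instance manifest for the "test-foo" object created above.
apiVersion: crd-publish-openapi-test-foo.example.com/v1
kind: E2e-test-crd-publish-openapi-978-crd
metadata:
  name: test-foo
spec: {}      # "Specification of Foo" per kubectl explain; contents assumed
```

The later `rc: 1` entries correspond to manifests that violate the published validation schema (unknown or missing required properties), which client-side validation in `kubectl create`/`apply` rejects before the request reaches the server.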
May 17 00:06:40.362: INFO: stderr: "" May 17 00:06:40.362: INFO: stdout: "e2e-test-crd-publish-openapi-978-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 17 00:06:40.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5207 delete e2e-test-crd-publish-openapi-978-crds test-foo' May 17 00:06:40.473: INFO: stderr: "" May 17 00:06:40.473: INFO: stdout: "e2e-test-crd-publish-openapi-978-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 17 00:06:40.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5207 apply -f -' May 17 00:06:40.737: INFO: stderr: "" May 17 00:06:40.737: INFO: stdout: "e2e-test-crd-publish-openapi-978-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 17 00:06:40.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5207 delete e2e-test-crd-publish-openapi-978-crds test-foo' May 17 00:06:40.851: INFO: stderr: "" May 17 00:06:40.851: INFO: stdout: "e2e-test-crd-publish-openapi-978-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 17 00:06:40.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5207 create -f -' May 17 00:06:41.102: INFO: rc: 1 May 17 00:06:41.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5207 apply -f -' May 17 00:06:41.345: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 17 00:06:41.345: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5207 create -f -' May 17 00:06:41.596: INFO: rc: 1 May 17 00:06:41.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5207 apply -f -' May 17 00:06:41.859: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 17 00:06:41.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-978-crds' May 17 00:06:42.118: INFO: stderr: "" May 17 00:06:42.118: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-978-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 17 00:06:42.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-978-crds.metadata' May 17 00:06:42.374: INFO: stderr: "" May 17 00:06:42.374: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-978-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 17 00:06:42.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-978-crds.spec' May 17 00:06:42.636: INFO: stderr: "" May 17 00:06:42.636: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-978-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 17 00:06:42.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-978-crds.spec.bars' May 17 00:06:42.857: INFO: stderr: "" May 17 00:06:42.857: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-978-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n 
List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 17 00:06:42.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-978-crds.spec.bars2' May 17 00:06:43.107: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:06:45.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5207" for this suite. • [SLOW TEST:10.622 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":72,"skipped":1261,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:06:45.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings 
and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-caae70d9-0983-4fc1-88b5-e04357417d01 STEP: Creating a pod to test consume secrets May 17 00:06:45.093: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-22e161ef-8ffb-424f-b9c9-6c721553d543" in namespace "projected-9861" to be "Succeeded or Failed" May 17 00:06:45.112: INFO: Pod "pod-projected-secrets-22e161ef-8ffb-424f-b9c9-6c721553d543": Phase="Pending", Reason="", readiness=false. Elapsed: 18.670113ms May 17 00:06:47.129: INFO: Pod "pod-projected-secrets-22e161ef-8ffb-424f-b9c9-6c721553d543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035800416s May 17 00:06:49.134: INFO: Pod "pod-projected-secrets-22e161ef-8ffb-424f-b9c9-6c721553d543": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041035156s STEP: Saw pod success May 17 00:06:49.134: INFO: Pod "pod-projected-secrets-22e161ef-8ffb-424f-b9c9-6c721553d543" satisfied condition "Succeeded or Failed" May 17 00:06:49.139: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-22e161ef-8ffb-424f-b9c9-6c721553d543 container projected-secret-volume-test: STEP: delete the pod May 17 00:06:49.454: INFO: Waiting for pod pod-projected-secrets-22e161ef-8ffb-424f-b9c9-6c721553d543 to disappear May 17 00:06:49.458: INFO: Pod pod-projected-secrets-22e161ef-8ffb-424f-b9c9-6c721553d543 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:06:49.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9861" for this suite. 
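Annotation: the projected-secret test above follows the framework's standard pattern of polling a pod's phase ("Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", with the Elapsed timestamps in the log). A minimal Python sketch of that polling loop, assuming illustrative names (`wait_for_pod_phase` is not the actual e2e framework API):

```python
import time

def wait_for_pod_phase(get_phase, target_phases, timeout=300.0,
                       interval=2.0, clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns one of target_phases or timeout expires.

    Illustrative sketch of the e2e framework's wait loop; the real
    implementation lives in test/e2e/framework and differs in detail.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in target_phases:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_phase(lambda: next(phases),
                              {"Succeeded", "Failed"},
                              sleep=lambda s: None)
```

The "Succeeded or Failed" target set explains why the log reports `readiness=false` on success: a completed test pod is terminal, not ready.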
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":73,"skipped":1264,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:06:49.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:06:53.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4617" for this suite. 
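Annotation: the Docker Containers test above ("should use the image defaults if command and args are blank") exercises the documented interaction between a container's `command`/`args` fields and the image's ENTRYPOINT/CMD. A sketch of that resolution rule in Python (illustrative helper, not Kubernetes code):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    """Resolve the argv a container runs, per the Kubernetes rule:

    - neither command nor args set: image ENTRYPOINT + image CMD
    - command set, args unset:      command only (image CMD is ignored)
    - command unset, args set:      image ENTRYPOINT + args
    - both set:                     command + args
    """
    exe = list(command) if command else list(entrypoint)
    if args is not None:
        extra = list(args)
    elif command:
        extra = []          # command overrides; image CMD is dropped
    else:
        extra = list(cmd)   # fall back to the image defaults
    return exe + extra
```

With both fields blank, as in this test, the image defaults run unchanged: `effective_invocation(["/pause"], [])` yields `["/pause"]`.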
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":74,"skipped":1275,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:06:53.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:06:53.993: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c95a7a93-fbb5-468d-8393-ae6ef4656865" in namespace "security-context-test-6291" to be "Succeeded or Failed" May 17 00:06:53.996: INFO: Pod "busybox-privileged-false-c95a7a93-fbb5-468d-8393-ae6ef4656865": Phase="Pending", Reason="", readiness=false. Elapsed: 3.243653ms May 17 00:06:56.008: INFO: Pod "busybox-privileged-false-c95a7a93-fbb5-468d-8393-ae6ef4656865": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015069555s May 17 00:06:58.012: INFO: Pod "busybox-privileged-false-c95a7a93-fbb5-468d-8393-ae6ef4656865": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019735266s May 17 00:06:58.012: INFO: Pod "busybox-privileged-false-c95a7a93-fbb5-468d-8393-ae6ef4656865" satisfied condition "Succeeded or Failed" May 17 00:06:58.034: INFO: Got logs for pod "busybox-privileged-false-c95a7a93-fbb5-468d-8393-ae6ef4656865": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:06:58.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6291" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":75,"skipped":1285,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:06:58.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: 
Gathering metrics W0517 00:07:11.061705 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 17 00:07:11.061: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:07:11.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4651" for this suite. 
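Annotation: the garbage-collector test above gives half the pods of `simpletest-rc-to-be-deleted` a second owner, `simpletest-rc-to-stay`, then deletes the first RC. The property under test is that an object is collected only when every entry in its `ownerReferences` is gone. A toy Python model of that semantics (names are illustrative; the real garbage collector works on a dependency graph with foreground/background policies):

```python
def collect_garbage(objects, live_owners):
    """Return the names of objects eligible for deletion.

    objects: dict mapping object name -> set of owner names (ownerReferences).
    live_owners: set of owner names that still exist.
    An object with at least one live owner must NOT be collected.
    """
    live = set(live_owners)
    return {name for name, refs in objects.items()
            if refs and not (refs & live)}

# Half the pods also list rc-to-stay as an owner, as in the test.
pods = {
    "pod-a": {"simpletest-rc-to-be-deleted"},
    "pod-b": {"simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"},
}
deleted = collect_garbage(pods, live_owners={"simpletest-rc-to-stay"})
```

Only `pod-a` is eligible; `pod-b` survives because one valid owner remains, which is exactly the condition the test asserts.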
• [SLOW TEST:13.017 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":76,"skipped":1305,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:07:11.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:09:11.771: INFO: Deleting pod "var-expansion-1ec40043-0651-4a7d-9913-009f192719c9" in namespace "var-expansion-4179" May 17 00:09:11.776: INFO: Wait up to 5m0s for pod "var-expansion-1ec40043-0651-4a7d-9913-009f192719c9" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:09:15.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "var-expansion-4179" for this suite. • [SLOW TEST:124.715 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":77,"skipped":1314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:09:15.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 17 00:09:15.913: INFO: PodSpec: initContainers in spec.initContainers May 17 00:10:01.507: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-55fc3009-95d4-45c0-8d51-2d89b6d8bed9", GenerateName:"", 
Namespace:"init-container-4327", SelfLink:"/api/v1/namespaces/init-container-4327/pods/pod-init-55fc3009-95d4-45c0-8d51-2d89b6d8bed9", UID:"037f4252-5349-44c2-9229-a19feaa56d61", ResourceVersion:"5280956", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725270955, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"913503270"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001a0bc80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001a0bca0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001a0bcc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001a0bce0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2qq2b", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004325c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2qq2b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2qq2b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2qq2b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00084ef28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c890a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", 
Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00084f090)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00084f0c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00084f0c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00084f0cc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270956, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270956, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725270956, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725270955, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.2.161", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.161"}}, StartTime:(*v1.Time)(0xc001a0bd00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001a0bd40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c89180)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b9ebb1620e3823542f2acf66762a31ddee5cb68c28945bf73f8369c614a4311e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a0bd60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a0bd20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00084f14f)}}, 
QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:10:01.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4327" for this suite. • [SLOW TEST:45.703 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":78,"skipped":1347,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:10:01.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-1026e0e6-27d6-4ea1-9730-ace60db1d551 STEP: Creating a pod to test consume configMaps May 17 00:10:01.938: 
INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872" in namespace "projected-6050" to be "Succeeded or Failed" May 17 00:10:01.941: INFO: Pod "pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872": Phase="Pending", Reason="", readiness=false. Elapsed: 3.125094ms May 17 00:10:03.991: INFO: Pod "pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052736308s May 17 00:10:05.995: INFO: Pod "pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872": Phase="Running", Reason="", readiness=true. Elapsed: 4.056775498s May 17 00:10:07.999: INFO: Pod "pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060922399s STEP: Saw pod success May 17 00:10:07.999: INFO: Pod "pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872" satisfied condition "Succeeded or Failed" May 17 00:10:08.002: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872 container projected-configmap-volume-test: STEP: delete the pod May 17 00:10:08.071: INFO: Waiting for pod pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872 to disappear May 17 00:10:08.105: INFO: Pod pod-projected-configmaps-11f321b7-3616-40bf-92f5-c397d3042872 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:10:08.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6050" for this suite. 
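The `Waiting up to 5m0s for pod ... Elapsed:` lines above come from a poll loop that re-checks the pod phase on an interval until it reaches a terminal state or the timeout expires. A minimal sketch of that polling pattern (generic and illustrative, not the actual e2e framework helper; the simulated phase sequence mirrors the Pending → Pending → Running → Succeeded progression in the log):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0):
    """Poll check() until it returns True or timeout elapses.

    Loosely mirrors the e2e wait loop that logs
    'Phase=..., Elapsed: ...' on each poll. Returns the elapsed
    time on success, raises TimeoutError otherwise.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        time.sleep(interval)

# Simulated pod that reaches a terminal phase on the fourth poll,
# like the Pending -> Pending -> Running -> Succeeded sequence above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
current = {"phase": "Pending"}

def poll():
    current["phase"] = next(phases, "Succeeded")
    return current["phase"] in ("Succeeded", "Failed")

elapsed = wait_for_condition(poll, timeout=10.0, interval=0.01)
```

The framework's real helper additionally distinguishes "Succeeded or Failed" from plain readiness, but the timeout/interval shape is the same.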
• [SLOW TEST:6.598 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1351,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:10:08.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0517 00:10:09.291631 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 17 00:10:09.291: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:10:09.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6292" for this suite. 
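The garbage-collector test above deletes a Deployment and then waits for its ReplicaSet and Pods to be cascade-deleted via owner references (the "expected 0 rs, got 1 rs" lines are the wait loop observing intermediate state before the collector catches up). A toy in-memory model of owner-reference cascading, ignoring finalizers and foreground/background distinctions (object names are illustrative):

```python
def cascade_delete(objects, name, orphan=False):
    """Delete `name` and, unless orphaning, everything that
    transitively lists it as an owner.

    `objects` maps object name -> set of owner names, loosely
    modelling metadata.ownerReferences. Returns the survivors.
    """
    doomed = {name}
    if not orphan:
        changed = True
        while changed:  # propagate through Deployment -> RS -> Pod chains
            changed = False
            for obj, owners in objects.items():
                if obj not in doomed and owners & doomed:
                    doomed.add(obj)
                    changed = True
    return {obj: owners for obj, owners in objects.items() if obj not in doomed}

cluster = {
    "deploy": set(),
    "rs-1": {"deploy"},
    "pod-a": {"rs-1"},
    "pod-b": {"rs-1"},
}
remaining = cascade_delete(cluster, "deploy")
orphaned = cascade_delete(cluster, "deploy", orphan=True)
```

With `orphan=True` the dependents survive with dangling owner references, which is exactly the case the companion "orphaning" conformance tests exercise.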
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":80,"skipped":1359,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:10:09.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:10:13.652: INFO: Waiting up to 5m0s for pod "client-envvars-82454b3d-fbe6-4d75-ad4b-be8cf913e541" in namespace "pods-683" to be "Succeeded or Failed" May 17 00:10:13.698: INFO: Pod "client-envvars-82454b3d-fbe6-4d75-ad4b-be8cf913e541": Phase="Pending", Reason="", readiness=false. Elapsed: 46.076805ms May 17 00:10:15.702: INFO: Pod "client-envvars-82454b3d-fbe6-4d75-ad4b-be8cf913e541": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050338829s May 17 00:10:17.706: INFO: Pod "client-envvars-82454b3d-fbe6-4d75-ad4b-be8cf913e541": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05465817s STEP: Saw pod success May 17 00:10:17.707: INFO: Pod "client-envvars-82454b3d-fbe6-4d75-ad4b-be8cf913e541" satisfied condition "Succeeded or Failed" May 17 00:10:17.710: INFO: Trying to get logs from node latest-worker2 pod client-envvars-82454b3d-fbe6-4d75-ad4b-be8cf913e541 container env3cont: STEP: delete the pod May 17 00:10:17.974: INFO: Waiting for pod client-envvars-82454b3d-fbe6-4d75-ad4b-be8cf913e541 to disappear May 17 00:10:18.014: INFO: Pod client-envvars-82454b3d-fbe6-4d75-ad4b-be8cf913e541 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:10:18.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-683" for this suite. • [SLOW TEST:8.725 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":81,"skipped":1378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:10:18.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting 
for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:10:18.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 17 00:10:18.698: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-17T00:10:18Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-17T00:10:18Z]] name:name1 resourceVersion:5281123 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b370a40f-a43b-4097-95c8-afe310818c30] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 17 00:10:28.705: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-17T00:10:28Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-17T00:10:28Z]] name:name2 resourceVersion:5281177 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cdf88367-6279-4985-aaef-bed32f612a4d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 17 00:10:38.711: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-17T00:10:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test 
operation:Update time:2020-05-17T00:10:38Z]] name:name1 resourceVersion:5281207 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b370a40f-a43b-4097-95c8-afe310818c30] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 17 00:10:48.719: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-17T00:10:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-17T00:10:48Z]] name:name2 resourceVersion:5281237 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cdf88367-6279-4985-aaef-bed32f612a4d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 17 00:10:58.730: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-17T00:10:18Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-17T00:10:38Z]] name:name1 resourceVersion:5281267 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b370a40f-a43b-4097-95c8-afe310818c30] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 17 00:11:08.740: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-17T00:10:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test 
operation:Update time:2020-05-17T00:10:48Z]] name:name2 resourceVersion:5281297 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cdf88367-6279-4985-aaef-bed32f612a4d] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:11:19.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3843" for this suite. • [SLOW TEST:61.238 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":82,"skipped":1402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:11:19.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:11:19.390: INFO: Create a RollingUpdate DaemonSet May 17 00:11:19.394: INFO: Check that daemon pods launch on every node of the cluster May 17 00:11:19.407: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:19.425: INFO: Number of nodes with available pods: 0 May 17 00:11:19.425: INFO: Node latest-worker is running more than one daemon pod May 17 00:11:20.430: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:20.433: INFO: Number of nodes with available pods: 0 May 17 00:11:20.433: INFO: Node latest-worker is running more than one daemon pod May 17 00:11:21.431: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:21.435: INFO: Number of nodes with available pods: 0 May 17 00:11:21.435: INFO: Node latest-worker is running more than one daemon pod May 17 00:11:22.598: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:22.602: INFO: Number of nodes with available pods: 0 May 17 00:11:22.602: INFO: Node latest-worker is running more than one daemon pod May 17 00:11:23.430: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:23.433: INFO: 
Number of nodes with available pods: 0 May 17 00:11:23.433: INFO: Node latest-worker is running more than one daemon pod May 17 00:11:24.429: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:24.450: INFO: Number of nodes with available pods: 2 May 17 00:11:24.450: INFO: Number of running nodes: 2, number of available pods: 2 May 17 00:11:24.450: INFO: Update the DaemonSet to trigger a rollout May 17 00:11:24.456: INFO: Updating DaemonSet daemon-set May 17 00:11:35.493: INFO: Roll back the DaemonSet before rollout is complete May 17 00:11:35.501: INFO: Updating DaemonSet daemon-set May 17 00:11:35.501: INFO: Make sure DaemonSet rollback is complete May 17 00:11:35.510: INFO: Wrong image for pod: daemon-set-mxhqc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 17 00:11:35.510: INFO: Pod daemon-set-mxhqc is not available May 17 00:11:35.558: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:36.563: INFO: Wrong image for pod: daemon-set-mxhqc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 17 00:11:36.563: INFO: Pod daemon-set-mxhqc is not available May 17 00:11:36.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:37.562: INFO: Wrong image for pod: daemon-set-mxhqc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 17 00:11:37.562: INFO: Pod daemon-set-mxhqc is not available May 17 00:11:37.565: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:11:38.563: INFO: Pod daemon-set-s9zk5 is not available May 17 00:11:38.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8342, will wait for the garbage collector to delete the pods May 17 00:11:38.631: INFO: Deleting DaemonSet.extensions daemon-set took: 5.67166ms May 17 00:11:38.932: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27407ms May 17 00:11:44.962: INFO: Number of nodes with available pods: 0 May 17 00:11:44.962: INFO: Number of running nodes: 0, number of available pods: 0 May 17 00:11:44.964: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8342/daemonsets","resourceVersion":"5281484"},"items":null} May 17 00:11:44.967: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8342/pods","resourceVersion":"5281484"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:11:44.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8342" for this suite. 
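The rollback check above works by comparing each daemon pod's container image against the image of the revision being rolled back to; pods still running the bad image produce the "Wrong image for pod ... Expected: ..., got: foo:non-existent" lines until they are replaced. A simplified version of that comparison, with made-up pod records shaped after the log:

```python
def wrong_image_pods(pods, expected_image):
    """Return names of pods whose container image does not match
    the rollback target, mimicking the 'Wrong image for pod' log."""
    return [p["name"] for p in pods if p["image"] != expected_image]

pods = [
    {"name": "daemon-set-mxhqc", "image": "foo:non-existent"},
    {"name": "daemon-set-s9zk5", "image": "docker.io/library/httpd:2.4.38-alpine"},
]
bad = wrong_image_pods(pods, "docker.io/library/httpd:2.4.38-alpine")
```

The "without unnecessary restarts" assertion then amounts to checking that pods which never ran the bad image keep their original restart counts through the rollback.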
• [SLOW TEST:25.717 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":83,"skipped":1437,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:11:44.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:11:46.187: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:11:48.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271106, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271106, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271106, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271106, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 00:11:50.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271106, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271106, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271106, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271106, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:11:53.274: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a 
webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:11:53.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3234" for this suite. STEP: Destroying namespace "webhook-3234-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.558 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":84,"skipped":1459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:11:53.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned 
in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 17 00:11:53.758: INFO: Waiting up to 5m0s for pod "pod-511ae122-d134-48e0-b675-b32c6ca52a99" in namespace "emptydir-9280" to be "Succeeded or Failed" May 17 00:11:53.786: INFO: Pod "pod-511ae122-d134-48e0-b675-b32c6ca52a99": Phase="Pending", Reason="", readiness=false. Elapsed: 28.016108ms May 17 00:11:55.849: INFO: Pod "pod-511ae122-d134-48e0-b675-b32c6ca52a99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091141117s May 17 00:11:57.853: INFO: Pod "pod-511ae122-d134-48e0-b675-b32c6ca52a99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094919776s STEP: Saw pod success May 17 00:11:57.853: INFO: Pod "pod-511ae122-d134-48e0-b675-b32c6ca52a99" satisfied condition "Succeeded or Failed" May 17 00:11:57.856: INFO: Trying to get logs from node latest-worker pod pod-511ae122-d134-48e0-b675-b32c6ca52a99 container test-container: STEP: delete the pod May 17 00:11:58.082: INFO: Waiting for pod pod-511ae122-d134-48e0-b675-b32c6ca52a99 to disappear May 17 00:11:58.100: INFO: Pod pod-511ae122-d134-48e0-b675-b32c6ca52a99 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:11:58.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9280" for this suite. 
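The (root,0644,tmpfs) case writes a file into the emptyDir mount and has the test container verify the mode bits and content. The same permission check can be sketched locally with a temporary directory standing in for the tmpfs-backed volume (path and content here are illustrative, not the test's actual file):

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as vol:  # stand-in for the emptyDir mount
    path = os.path.join(vol, "test-file")
    with open(path, "w") as f:
        f.write("mount-tmpfs\n")
    os.chmod(path, 0o644)  # explicit chmod, so the process umask is irrelevant
    mode = stat.S_IMODE(os.stat(path).st_mode)
    perms = stat.filemode(os.stat(path).st_mode)
```

`stat.filemode` renders the bits the way `ls -l` would, which is effectively what the test container's output check compares against.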
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1490,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:11:58.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:12:13.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5090" for this suite. STEP: Destroying namespace "nsdeletetest-419" for this suite. May 17 00:12:13.504: INFO: Namespace nsdeletetest-419 was already deleted STEP: Destroying namespace "nsdeletetest-6669" for this suite. 
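Namespace deletion is itself a cascade: the namespace controller removes every object inside the namespace before the Namespace object itself goes away, which is why the test can recreate the namespace and verify it is empty. A toy model of that cleanup pass, assuming a flat pod store keyed by (namespace, name):

```python
def delete_namespace(pods, namespace):
    """Drop every pod in `namespace`, like the namespace
    controller's cleanup pass. `pods` is a set of (ns, name) pairs."""
    return {p for p in pods if p[0] != namespace}

pods = {("nsdeletetest", "test-pod"), ("kube-system", "kube-proxy-x")}
pods = delete_namespace(pods, "nsdeletetest")
```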
• [SLOW TEST:15.366 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":86,"skipped":1490,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:12:13.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:12:14.026: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:12:16.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271134, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271134, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271134, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271134, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:12:19.106: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:12:19.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9572" for this suite. STEP: Destroying namespace "webhook-9572-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.333 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":87,"skipped":1497,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:12:19.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 17 00:12:19.945: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 17 00:12:19.954: INFO: Waiting for terminating namespaces to be deleted... 
May 17 00:12:19.956: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 17 00:12:19.960: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 17 00:12:19.960: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 17 00:12:19.960: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 17 00:12:19.960: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 17 00:12:19.960: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 17 00:12:19.960: INFO: Container kindnet-cni ready: true, restart count 0 May 17 00:12:19.960: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 17 00:12:19.960: INFO: Container kube-proxy ready: true, restart count 0 May 17 00:12:19.960: INFO: sample-webhook-deployment-75dd644756-nxfd6 from webhook-9572 started at 2020-05-17 00:12:14 +0000 UTC (1 container statuses recorded) May 17 00:12:19.960: INFO: Container sample-webhook ready: true, restart count 0 May 17 00:12:19.960: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 17 00:12:19.964: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 17 00:12:19.964: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 17 00:12:19.964: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 17 00:12:19.964: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 17 00:12:19.964: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses 
recorded) May 17 00:12:19.964: INFO: Container kindnet-cni ready: true, restart count 0 May 17 00:12:19.964: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 17 00:12:19.964: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-9fc65446-8b3a-4bd6-b098-1701a62e93b0 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-9fc65446-8b3a-4bd6-b098-1701a62e93b0 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-9fc65446-8b3a-4bd6-b098-1701a62e93b0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:17:28.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9791" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:308.328 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":88,"skipped":1497,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:17:28.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
May 17 00:17:28.267: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 17 00:17:28.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3580'
May 17 00:17:31.554: INFO: stderr: ""
May 17 00:17:31.554: INFO: stdout: "service/agnhost-slave created\n"
May 17 00:17:31.554: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 17 00:17:31.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3580'
May 17 00:17:31.831: INFO: stderr: ""
May 17 00:17:31.831: INFO: stdout: "service/agnhost-master created\n"
May 17 00:17:31.832: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 17 00:17:31.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3580'
May 17 00:17:32.168: INFO: stderr: ""
May 17 00:17:32.168: INFO: stdout: "service/frontend created\n"
May 17 00:17:32.168: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 17 00:17:32.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3580'
May 17 00:17:32.433: INFO: stderr: ""
May 17 00:17:32.433: INFO: stdout: "deployment.apps/frontend created\n"
May 17 00:17:32.433: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 17 00:17:32.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3580'
May 17 00:17:32.740: INFO: stderr: ""
May 17 00:17:32.740: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 17 00:17:32.740: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 17 00:17:32.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3580'
May 17 00:17:33.029: INFO: stderr: ""
May 17 00:17:33.029: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 17 00:17:33.029: INFO: Waiting for all frontend pods to be Running.
May 17 00:17:43.080: INFO: Waiting for frontend to serve content.
May 17 00:17:43.091: INFO: Trying to add a new entry to the guestbook.
May 17 00:17:43.102: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 17 00:17:43.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3580'
May 17 00:17:43.300: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 17 00:17:43.300: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 17 00:17:43.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3580'
May 17 00:17:43.451: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 17 00:17:43.451: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 17 00:17:43.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3580'
May 17 00:17:43.606: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 17 00:17:43.606: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 17 00:17:43.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3580'
May 17 00:17:43.716: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 17 00:17:43.716: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 17 00:17:43.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3580'
May 17 00:17:43.823: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 17 00:17:43.824: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 17 00:17:43.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3580'
May 17 00:17:44.215: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 17 00:17:44.215: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:17:44.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3580" for this suite.
• [SLOW TEST:16.116 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":89,"skipped":1518,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:17:44.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:17:44.938: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:17:45.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8879" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":90,"skipped":1530,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:17:46.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-650 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-650 to expose endpoints map[] May 17 00:17:47.149: INFO: successfully validated that service multi-endpoint-test in namespace services-650 exposes endpoints map[] (21.408449ms elapsed) STEP: Creating pod pod1 in namespace services-650 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-650 to expose endpoints map[pod1:[100]] May 17 00:17:51.306: INFO: successfully validated that service multi-endpoint-test in namespace services-650 exposes endpoints map[pod1:[100]] (4.140879029s elapsed) STEP: Creating pod pod2 in namespace services-650 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-650 to expose endpoints map[pod1:[100] pod2:[101]] May 17 00:17:54.494: INFO: successfully validated that service multi-endpoint-test in namespace services-650 exposes endpoints map[pod1:[100] pod2:[101]] (3.176521869s elapsed) STEP: Deleting pod pod1 in namespace services-650 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-650 to expose endpoints map[pod2:[101]] May 17 00:17:54.545: INFO: successfully validated that service multi-endpoint-test in namespace services-650 exposes endpoints map[pod2:[101]] (47.700902ms elapsed) STEP: Deleting pod pod2 in namespace services-650 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-650 to expose endpoints map[] May 17 00:17:55.574: INFO: successfully validated that service multi-endpoint-test in namespace services-650 exposes endpoints map[] (1.023283335s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:17:55.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "services-650" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:9.231 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":91,"skipped":1550,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:17:55.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 17 00:17:55.756: INFO: Waiting up to 5m0s for pod "var-expansion-f3ecd37e-4d62-45ac-8b0d-6cb31c29bf49" in namespace "var-expansion-5589" to be "Succeeded or Failed" May 17 00:17:55.809: INFO: Pod "var-expansion-f3ecd37e-4d62-45ac-8b0d-6cb31c29bf49": Phase="Pending", Reason="", readiness=false. Elapsed: 53.184824ms May 17 00:17:57.918: INFO: Pod "var-expansion-f3ecd37e-4d62-45ac-8b0d-6cb31c29bf49": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.161877372s May 17 00:17:59.921: INFO: Pod "var-expansion-f3ecd37e-4d62-45ac-8b0d-6cb31c29bf49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165618549s STEP: Saw pod success May 17 00:17:59.922: INFO: Pod "var-expansion-f3ecd37e-4d62-45ac-8b0d-6cb31c29bf49" satisfied condition "Succeeded or Failed" May 17 00:17:59.924: INFO: Trying to get logs from node latest-worker2 pod var-expansion-f3ecd37e-4d62-45ac-8b0d-6cb31c29bf49 container dapi-container: STEP: delete the pod May 17 00:17:59.995: INFO: Waiting for pod var-expansion-f3ecd37e-4d62-45ac-8b0d-6cb31c29bf49 to disappear May 17 00:18:00.006: INFO: Pod var-expansion-f3ecd37e-4d62-45ac-8b0d-6cb31c29bf49 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:18:00.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5589" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1551,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:18:00.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:18:11.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9736" for this suite. • [SLOW TEST:11.154 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":93,"skipped":1609,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:18:11.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 17 00:20:11.826: INFO: Successfully updated pod "var-expansion-206b34c0-5836-4d55-8f7e-d7eee82717d8" STEP: waiting for pod running STEP: deleting the pod gracefully May 17 00:20:13.854: INFO: Deleting pod "var-expansion-206b34c0-5836-4d55-8f7e-d7eee82717d8" in namespace "var-expansion-4513" May 17 00:20:13.859: INFO: Wait up to 5m0s for pod "var-expansion-206b34c0-5836-4d55-8f7e-d7eee82717d8" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:20:47.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4513" for this suite. 
• [SLOW TEST:156.684 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":94,"skipped":1622,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:20:47.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 17 00:20:47.988: INFO: Waiting up to 5m0s for pod "pod-6b13c9be-1feb-4344-a823-2a2948b99955" in namespace "emptydir-9527" to be "Succeeded or Failed" May 17 00:20:47.991: INFO: Pod "pod-6b13c9be-1feb-4344-a823-2a2948b99955": Phase="Pending", Reason="", readiness=false. Elapsed: 3.790626ms May 17 00:20:49.996: INFO: Pod "pod-6b13c9be-1feb-4344-a823-2a2948b99955": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008030493s May 17 00:20:52.000: INFO: Pod "pod-6b13c9be-1feb-4344-a823-2a2948b99955": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012690001s STEP: Saw pod success May 17 00:20:52.000: INFO: Pod "pod-6b13c9be-1feb-4344-a823-2a2948b99955" satisfied condition "Succeeded or Failed" May 17 00:20:52.004: INFO: Trying to get logs from node latest-worker2 pod pod-6b13c9be-1feb-4344-a823-2a2948b99955 container test-container: STEP: delete the pod May 17 00:20:52.166: INFO: Waiting for pod pod-6b13c9be-1feb-4344-a823-2a2948b99955 to disappear May 17 00:20:52.177: INFO: Pod pod-6b13c9be-1feb-4344-a823-2a2948b99955 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:20:52.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9527" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":95,"skipped":1629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:20:52.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: 
submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 17 00:20:56.326: INFO: &Pod{ObjectMeta:{send-events-8934a21a-c978-4083-a40d-176f1131574c events-6524 /api/v1/namespaces/events-6524/pods/send-events-8934a21a-c978-4083-a40d-176f1131574c 3ccce5eb-7524-4929-8986-ea9160934c78 5283780 0 2020-05-17 00:20:52 +0000 UTC map[name:foo time:267051649] map[] [] [] [{e2e.test Update v1 2020-05-17 00:20:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:20:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.174\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4rfm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4rfm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4rfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:
nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:20:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:20:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:20:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:20:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.174,StartTime:2020-05-17 00:20:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:20:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://19cc0c5b2ea7f432d2d39ac536e59414547eefae647c8dfca46be5930af62777,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 17 00:20:58.331: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 17 00:21:00.335: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:21:00.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6524" for this suite. 
• [SLOW TEST:8.166 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":96,"skipped":1670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:21:00.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:21:00.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-151" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":97,"skipped":1714,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:21:00.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1314 STEP: creating service affinity-nodeport-transition in namespace services-1314 STEP: creating replication controller affinity-nodeport-transition in namespace services-1314 I0517 00:21:00.707018 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1314, replica count: 3 I0517 00:21:03.757524 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:21:06.757715 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 00:21:06.769: INFO: Creating new exec pod May 17 00:21:11.854: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1314 execpod-affinityngjkf -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 17 00:21:12.115: INFO: stderr: "I0517 00:21:11.981705 2657 log.go:172] (0xc000a1a000) (0xc00016a6e0) Create stream\nI0517 00:21:11.981770 2657 log.go:172] (0xc000a1a000) (0xc00016a6e0) Stream added, broadcasting: 1\nI0517 00:21:11.984743 2657 log.go:172] (0xc000a1a000) Reply frame received for 1\nI0517 00:21:11.984806 2657 log.go:172] (0xc000a1a000) (0xc00035c3c0) Create stream\nI0517 00:21:11.984829 2657 log.go:172] (0xc000a1a000) (0xc00035c3c0) Stream added, broadcasting: 3\nI0517 00:21:11.985953 2657 log.go:172] (0xc000a1a000) Reply frame received for 3\nI0517 00:21:11.985997 2657 log.go:172] (0xc000a1a000) (0xc000662dc0) Create stream\nI0517 00:21:11.986035 2657 log.go:172] (0xc000a1a000) (0xc000662dc0) Stream added, broadcasting: 5\nI0517 00:21:11.987018 2657 log.go:172] (0xc000a1a000) Reply frame received for 5\nI0517 00:21:12.085616 2657 log.go:172] (0xc000a1a000) Data frame received for 5\nI0517 00:21:12.085646 2657 log.go:172] (0xc000662dc0) (5) Data frame handling\nI0517 00:21:12.085669 2657 log.go:172] (0xc000662dc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0517 00:21:12.107301 2657 log.go:172] (0xc000a1a000) Data frame received for 5\nI0517 00:21:12.107339 2657 log.go:172] (0xc000662dc0) (5) Data frame handling\nI0517 00:21:12.107377 2657 log.go:172] (0xc000662dc0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0517 00:21:12.107407 2657 log.go:172] (0xc000a1a000) Data frame received for 3\nI0517 00:21:12.107558 2657 log.go:172] (0xc00035c3c0) (3) Data frame handling\nI0517 00:21:12.107593 2657 log.go:172] (0xc000a1a000) Data frame received for 5\nI0517 00:21:12.107618 2657 log.go:172] (0xc000662dc0) (5) Data frame handling\nI0517 00:21:12.110125 2657 log.go:172] (0xc000a1a000) Data frame 
received for 1\nI0517 00:21:12.110199 2657 log.go:172] (0xc00016a6e0) (1) Data frame handling\nI0517 00:21:12.110255 2657 log.go:172] (0xc00016a6e0) (1) Data frame sent\nI0517 00:21:12.110308 2657 log.go:172] (0xc000a1a000) (0xc00016a6e0) Stream removed, broadcasting: 1\nI0517 00:21:12.110358 2657 log.go:172] (0xc000a1a000) Go away received\nI0517 00:21:12.110817 2657 log.go:172] (0xc000a1a000) (0xc00016a6e0) Stream removed, broadcasting: 1\nI0517 00:21:12.110857 2657 log.go:172] (0xc000a1a000) (0xc00035c3c0) Stream removed, broadcasting: 3\nI0517 00:21:12.110882 2657 log.go:172] (0xc000a1a000) (0xc000662dc0) Stream removed, broadcasting: 5\n" May 17 00:21:12.115: INFO: stdout: "" May 17 00:21:12.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1314 execpod-affinityngjkf -- /bin/sh -x -c nc -zv -t -w 2 10.109.168.246 80' May 17 00:21:12.352: INFO: stderr: "I0517 00:21:12.263599 2676 log.go:172] (0xc000a274a0) (0xc000ab8320) Create stream\nI0517 00:21:12.263680 2676 log.go:172] (0xc000a274a0) (0xc000ab8320) Stream added, broadcasting: 1\nI0517 00:21:12.266846 2676 log.go:172] (0xc000a274a0) Reply frame received for 1\nI0517 00:21:12.266893 2676 log.go:172] (0xc000a274a0) (0xc000a3a0a0) Create stream\nI0517 00:21:12.266934 2676 log.go:172] (0xc000a274a0) (0xc000a3a0a0) Stream added, broadcasting: 3\nI0517 00:21:12.268145 2676 log.go:172] (0xc000a274a0) Reply frame received for 3\nI0517 00:21:12.268173 2676 log.go:172] (0xc000a274a0) (0xc000ab83c0) Create stream\nI0517 00:21:12.268198 2676 log.go:172] (0xc000a274a0) (0xc000ab83c0) Stream added, broadcasting: 5\nI0517 00:21:12.269322 2676 log.go:172] (0xc000a274a0) Reply frame received for 5\nI0517 00:21:12.347314 2676 log.go:172] (0xc000a274a0) Data frame received for 3\nI0517 00:21:12.347364 2676 log.go:172] (0xc000a3a0a0) (3) Data frame handling\nI0517 00:21:12.347402 2676 log.go:172] (0xc000a274a0) Data frame received for 
5\nI0517 00:21:12.347426 2676 log.go:172] (0xc000ab83c0) (5) Data frame handling\nI0517 00:21:12.347458 2676 log.go:172] (0xc000ab83c0) (5) Data frame sent\nI0517 00:21:12.347477 2676 log.go:172] (0xc000a274a0) Data frame received for 5\nI0517 00:21:12.347488 2676 log.go:172] (0xc000ab83c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.168.246 80\nConnection to 10.109.168.246 80 port [tcp/http] succeeded!\nI0517 00:21:12.348470 2676 log.go:172] (0xc000a274a0) Data frame received for 1\nI0517 00:21:12.348488 2676 log.go:172] (0xc000ab8320) (1) Data frame handling\nI0517 00:21:12.348508 2676 log.go:172] (0xc000ab8320) (1) Data frame sent\nI0517 00:21:12.348527 2676 log.go:172] (0xc000a274a0) (0xc000ab8320) Stream removed, broadcasting: 1\nI0517 00:21:12.348588 2676 log.go:172] (0xc000a274a0) Go away received\nI0517 00:21:12.348872 2676 log.go:172] (0xc000a274a0) (0xc000ab8320) Stream removed, broadcasting: 1\nI0517 00:21:12.348888 2676 log.go:172] (0xc000a274a0) (0xc000a3a0a0) Stream removed, broadcasting: 3\nI0517 00:21:12.348895 2676 log.go:172] (0xc000a274a0) (0xc000ab83c0) Stream removed, broadcasting: 5\n" May 17 00:21:12.353: INFO: stdout: "" May 17 00:21:12.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1314 execpod-affinityngjkf -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32459' May 17 00:21:12.566: INFO: stderr: "I0517 00:21:12.494939 2696 log.go:172] (0xc0009f3340) (0xc000b7e460) Create stream\nI0517 00:21:12.495006 2696 log.go:172] (0xc0009f3340) (0xc000b7e460) Stream added, broadcasting: 1\nI0517 00:21:12.500375 2696 log.go:172] (0xc0009f3340) Reply frame received for 1\nI0517 00:21:12.500420 2696 log.go:172] (0xc0009f3340) (0xc000704500) Create stream\nI0517 00:21:12.500430 2696 log.go:172] (0xc0009f3340) (0xc000704500) Stream added, broadcasting: 3\nI0517 00:21:12.501543 2696 log.go:172] (0xc0009f3340) Reply frame received for 3\nI0517 00:21:12.501569 2696 
log.go:172] (0xc0009f3340) (0xc000530d20) Create stream\nI0517 00:21:12.501576 2696 log.go:172] (0xc0009f3340) (0xc000530d20) Stream added, broadcasting: 5\nI0517 00:21:12.502620 2696 log.go:172] (0xc0009f3340) Reply frame received for 5\nI0517 00:21:12.558334 2696 log.go:172] (0xc0009f3340) Data frame received for 3\nI0517 00:21:12.558389 2696 log.go:172] (0xc000704500) (3) Data frame handling\nI0517 00:21:12.558425 2696 log.go:172] (0xc0009f3340) Data frame received for 5\nI0517 00:21:12.558442 2696 log.go:172] (0xc000530d20) (5) Data frame handling\nI0517 00:21:12.558461 2696 log.go:172] (0xc000530d20) (5) Data frame sent\nI0517 00:21:12.558481 2696 log.go:172] (0xc0009f3340) Data frame received for 5\nI0517 00:21:12.558508 2696 log.go:172] (0xc000530d20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32459\nConnection to 172.17.0.13 32459 port [tcp/32459] succeeded!\nI0517 00:21:12.560272 2696 log.go:172] (0xc0009f3340) Data frame received for 1\nI0517 00:21:12.560309 2696 log.go:172] (0xc000b7e460) (1) Data frame handling\nI0517 00:21:12.560350 2696 log.go:172] (0xc000b7e460) (1) Data frame sent\nI0517 00:21:12.560377 2696 log.go:172] (0xc0009f3340) (0xc000b7e460) Stream removed, broadcasting: 1\nI0517 00:21:12.560428 2696 log.go:172] (0xc0009f3340) Go away received\nI0517 00:21:12.560805 2696 log.go:172] (0xc0009f3340) (0xc000b7e460) Stream removed, broadcasting: 1\nI0517 00:21:12.560827 2696 log.go:172] (0xc0009f3340) (0xc000704500) Stream removed, broadcasting: 3\nI0517 00:21:12.560836 2696 log.go:172] (0xc0009f3340) (0xc000530d20) Stream removed, broadcasting: 5\n" May 17 00:21:12.566: INFO: stdout: "" May 17 00:21:12.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1314 execpod-affinityngjkf -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32459' May 17 00:21:12.790: INFO: stderr: "I0517 00:21:12.703332 2717 log.go:172] (0xc000ae5130) (0xc000633400) Create 
stream\nI0517 00:21:12.703405 2717 log.go:172] (0xc000ae5130) (0xc000633400) Stream added, broadcasting: 1\nI0517 00:21:12.708149 2717 log.go:172] (0xc000ae5130) Reply frame received for 1\nI0517 00:21:12.708189 2717 log.go:172] (0xc000ae5130) (0xc00053a960) Create stream\nI0517 00:21:12.708199 2717 log.go:172] (0xc000ae5130) (0xc00053a960) Stream added, broadcasting: 3\nI0517 00:21:12.709619 2717 log.go:172] (0xc000ae5130) Reply frame received for 3\nI0517 00:21:12.709651 2717 log.go:172] (0xc000ae5130) (0xc000436c80) Create stream\nI0517 00:21:12.709665 2717 log.go:172] (0xc000ae5130) (0xc000436c80) Stream added, broadcasting: 5\nI0517 00:21:12.710628 2717 log.go:172] (0xc000ae5130) Reply frame received for 5\nI0517 00:21:12.782726 2717 log.go:172] (0xc000ae5130) Data frame received for 3\nI0517 00:21:12.782786 2717 log.go:172] (0xc00053a960) (3) Data frame handling\nI0517 00:21:12.782833 2717 log.go:172] (0xc000ae5130) Data frame received for 5\nI0517 00:21:12.782851 2717 log.go:172] (0xc000436c80) (5) Data frame handling\nI0517 00:21:12.782869 2717 log.go:172] (0xc000436c80) (5) Data frame sent\nI0517 00:21:12.782895 2717 log.go:172] (0xc000ae5130) Data frame received for 5\nI0517 00:21:12.782909 2717 log.go:172] (0xc000436c80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32459\nConnection to 172.17.0.12 32459 port [tcp/32459] succeeded!\nI0517 00:21:12.784650 2717 log.go:172] (0xc000ae5130) Data frame received for 1\nI0517 00:21:12.784692 2717 log.go:172] (0xc000633400) (1) Data frame handling\nI0517 00:21:12.784707 2717 log.go:172] (0xc000633400) (1) Data frame sent\nI0517 00:21:12.784720 2717 log.go:172] (0xc000ae5130) (0xc000633400) Stream removed, broadcasting: 1\nI0517 00:21:12.785354 2717 log.go:172] (0xc000ae5130) (0xc000633400) Stream removed, broadcasting: 1\nI0517 00:21:12.785397 2717 log.go:172] (0xc000ae5130) (0xc00053a960) Stream removed, broadcasting: 3\nI0517 00:21:12.785590 2717 log.go:172] (0xc000ae5130) Go away received\nI0517 
00:21:12.785660 2717 log.go:172] (0xc000ae5130) (0xc000436c80) Stream removed, broadcasting: 5\n" May 17 00:21:12.791: INFO: stdout: "" May 17 00:21:12.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1314 execpod-affinityngjkf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32459/ ; done' May 17 00:21:13.118: INFO: stderr: "I0517 00:21:12.936983 2737 log.go:172] (0xc00003a420) (0xc00050edc0) Create stream\nI0517 00:21:12.937053 2737 log.go:172] (0xc00003a420) (0xc00050edc0) Stream added, broadcasting: 1\nI0517 00:21:12.940083 2737 log.go:172] (0xc00003a420) Reply frame received for 1\nI0517 00:21:12.940120 2737 log.go:172] (0xc00003a420) (0xc000535220) Create stream\nI0517 00:21:12.940130 2737 log.go:172] (0xc00003a420) (0xc000535220) Stream added, broadcasting: 3\nI0517 00:21:12.941369 2737 log.go:172] (0xc00003a420) Reply frame received for 3\nI0517 00:21:12.941437 2737 log.go:172] (0xc00003a420) (0xc0000dd040) Create stream\nI0517 00:21:12.941457 2737 log.go:172] (0xc00003a420) (0xc0000dd040) Stream added, broadcasting: 5\nI0517 00:21:12.942332 2737 log.go:172] (0xc00003a420) Reply frame received for 5\nI0517 00:21:12.996863 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:12.996900 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:12.996912 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:12.996924 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:12.996933 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:12.996941 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.017026 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.017067 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.017092 2737 
log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.018305 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.018348 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.018364 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.018396 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.018416 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.018440 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.022675 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.022704 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.022735 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.023267 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.023290 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.023302 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.023450 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.023469 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.023499 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.026655 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.026699 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.026727 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.027151 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.027211 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.027244 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.027277 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.027302 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 
00:21:13.027328 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.038113 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.038143 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.038161 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.038666 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.038693 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.038713 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.038734 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.038746 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.038779 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.043857 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.043873 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.043890 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.044438 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.044462 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.044490 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.044504 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.044522 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.044534 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.052747 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.052764 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.052774 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.053624 2737 log.go:172] (0xc00003a420) Data frame received for 
3\nI0517 00:21:13.053650 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.053663 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.053682 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.053716 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.053734 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.058174 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.058203 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.058226 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.058339 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.058351 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.058367 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.058514 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.058538 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.058564 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.066616 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.066652 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.066692 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.067374 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.067389 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.067399 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.067410 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.067430 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.067457 2737 log.go:172] (0xc000535220) (3) Data frame 
sent\nI0517 00:21:13.072850 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.072872 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.072907 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.073636 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.073665 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.073678 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ I0517 00:21:13.073763 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.073784 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.073806 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\nI0517 00:21:13.073818 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.073829 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\necho\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/I0517 00:21:13.073858 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\nI0517 00:21:13.073959 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.073975 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.073983 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n\nI0517 00:21:13.073998 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.074008 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.074019 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.078695 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.078708 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.078718 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.079225 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.079248 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.079256 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:32459/\nI0517 00:21:13.079266 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.079272 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.079279 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.083980 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.084002 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.084021 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.084528 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.084573 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.084592 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.084613 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.084627 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.084647 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.088988 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.089009 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.089028 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.090021 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.090046 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.090056 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.090071 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.090080 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.090090 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\nI0517 00:21:13.090105 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.090122 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.090158 2737 
log.go:172] (0xc0000dd040) (5) Data frame sent\nI0517 00:21:13.094736 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.094833 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.095019 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.095363 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.095431 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.095476 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.095837 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.095915 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.095981 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.102601 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.102620 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.102639 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.103282 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.103336 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.103354 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.103367 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.103378 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.103385 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\nI0517 00:21:13.103391 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.103396 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.103410 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\nI0517 00:21:13.106914 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.106934 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 
00:21:13.106954 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.107339 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.107356 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.107370 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.107391 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.107407 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.107417 2737 log.go:172] (0xc0000dd040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.110555 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.110569 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.110578 2737 log.go:172] (0xc000535220) (3) Data frame sent\nI0517 00:21:13.110959 2737 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:21:13.110972 2737 log.go:172] (0xc000535220) (3) Data frame handling\nI0517 00:21:13.111223 2737 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:21:13.111246 2737 log.go:172] (0xc0000dd040) (5) Data frame handling\nI0517 00:21:13.113243 2737 log.go:172] (0xc00003a420) Data frame received for 1\nI0517 00:21:13.113301 2737 log.go:172] (0xc00050edc0) (1) Data frame handling\nI0517 00:21:13.113340 2737 log.go:172] (0xc00050edc0) (1) Data frame sent\nI0517 00:21:13.113382 2737 log.go:172] (0xc00003a420) (0xc00050edc0) Stream removed, broadcasting: 1\nI0517 00:21:13.113466 2737 log.go:172] (0xc00003a420) Go away received\nI0517 00:21:13.113667 2737 log.go:172] (0xc00003a420) (0xc00050edc0) Stream removed, broadcasting: 1\nI0517 00:21:13.113680 2737 log.go:172] (0xc00003a420) (0xc000535220) Stream removed, broadcasting: 3\nI0517 00:21:13.113686 2737 log.go:172] (0xc00003a420) (0xc0000dd040) Stream removed, broadcasting: 5\n" May 17 00:21:13.119: INFO: stdout: 
"\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-ljft9\naffinity-nodeport-transition-ljft9\naffinity-nodeport-transition-r6djc\naffinity-nodeport-transition-r6djc\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-r6djc\naffinity-nodeport-transition-ljft9\naffinity-nodeport-transition-ljft9\naffinity-nodeport-transition-r6djc\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-ljft9\naffinity-nodeport-transition-ljft9\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-ljft9"
May 17 00:21:13.119: INFO: Received response from host: 
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-ljft9
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-ljft9
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-r6djc
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-r6djc
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-r6djc
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-ljft9
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-ljft9
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-r6djc
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-ljft9
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-ljft9
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-7wg55
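The mixed stdout above (responses spread across `-7wg55`, `-ljft9`, and `-r6djc`) is expected at this point in the test: the curl loop runs before the Service's session affinity is switched on. The check the suite effectively applies amounts to counting distinct backend names in the probe output. A minimal standalone sketch of that idea (the three sample hostnames are copied from this log; the counting logic is an assumption about the framework's check, not its actual code):

```shell
# Sketch of an affinity check over curl-loop output. With
# sessionAffinity set to ClientIP, every response line would name the
# same backend pod; mixed names mean affinity is not (yet) enforced.
responses='affinity-nodeport-transition-7wg55
affinity-nodeport-transition-ljft9
affinity-nodeport-transition-r6djc'

# Count distinct backend names (tr strips wc's padding on some platforms).
unique=$(printf '%s\n' "$responses" | sort -u | wc -l | tr -d ' ')
if [ "$unique" -eq 1 ]; then
  echo "affinity held"
else
  echo "affinity not enforced ($unique distinct backends)"
fi
# prints: affinity not enforced (3 distinct backends)
```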
May 17 00:21:13.119: INFO: Received response from host: affinity-nodeport-transition-ljft9 May 17 00:21:13.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1314 execpod-affinityngjkf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32459/ ; done' May 17 00:21:13.420: INFO: stderr: "I0517 00:21:13.254991 2757 log.go:172] (0xc000a204d0) (0xc0003b80a0) Create stream\nI0517 00:21:13.255062 2757 log.go:172] (0xc000a204d0) (0xc0003b80a0) Stream added, broadcasting: 1\nI0517 00:21:13.258272 2757 log.go:172] (0xc000a204d0) Reply frame received for 1\nI0517 00:21:13.258313 2757 log.go:172] (0xc000a204d0) (0xc000514500) Create stream\nI0517 00:21:13.258328 2757 log.go:172] (0xc000a204d0) (0xc000514500) Stream added, broadcasting: 3\nI0517 00:21:13.259176 2757 log.go:172] (0xc000a204d0) Reply frame received for 3\nI0517 00:21:13.259224 2757 log.go:172] (0xc000a204d0) (0xc0003828c0) Create stream\nI0517 00:21:13.259248 2757 log.go:172] (0xc000a204d0) (0xc0003828c0) Stream added, broadcasting: 5\nI0517 00:21:13.260400 2757 log.go:172] (0xc000a204d0) Reply frame received for 5\nI0517 00:21:13.321437 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.321513 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.321534 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.321564 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.321605 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.321632 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.326925 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.326952 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.326971 2757 log.go:172] (0xc000514500) (3) Data frame 
sent\nI0517 00:21:13.327387 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.327403 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.327411 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.327424 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.327442 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.327456 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.334267 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.334287 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.334301 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.334791 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.334807 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.334821 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.334898 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.334912 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.334926 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.338666 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.338687 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.338701 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.339114 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.339135 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.339156 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.339178 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.339197 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.339208 2757 log.go:172] 
(0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.344479 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.344500 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.344531 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.344955 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.344978 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.344989 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.345002 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.345010 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.345023 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.349606 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.349633 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.349668 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.350351 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.350407 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.350436 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.350461 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.350474 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.350491 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.355012 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.355065 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.355095 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.355594 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.355610 2757 
log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.355618 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.355700 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.355737 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.355778 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.360224 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.360247 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.360257 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.360837 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.360855 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.360864 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.360879 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.360887 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.360895 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.369570 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.369666 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.369763 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.370571 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.370590 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.370607 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.370797 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.370812 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.370825 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.376142 
2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.376169 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.376198 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.376772 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.376787 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.376799 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.376814 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.376824 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.376843 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.381429 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.381443 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.381450 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.381839 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.381855 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.381864 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.381875 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.381882 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.381888 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\nI0517 00:21:13.381913 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.381922 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.381936 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\nI0517 00:21:13.387443 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.387461 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.387476 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 
00:21:13.387987 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.388004 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.388011 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.388022 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.388027 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.388032 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\nI0517 00:21:13.388038 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.388042 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.388053 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\nI0517 00:21:13.391393 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.391412 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.391425 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.391943 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.391968 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.391977 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.392002 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.392018 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.392026 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\nI0517 00:21:13.392039 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.392046 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.392061 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\nI0517 00:21:13.397644 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.397659 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.397669 2757 log.go:172] (0xc000514500) (3) Data 
frame sent\nI0517 00:21:13.398184 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.398204 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.398218 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.398237 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.398247 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.398259 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.402440 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.402466 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.402483 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.402938 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.402951 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.402956 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.402965 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.402984 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.403006 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.407719 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.407808 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.407825 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.408076 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.408092 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.408104 2757 log.go:172] (0xc0003828c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32459/\nI0517 00:21:13.408169 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.408181 2757 log.go:172] (0xc000514500) (3) 
Data frame handling\nI0517 00:21:13.408191 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.412956 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.412983 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.413002 2757 log.go:172] (0xc000514500) (3) Data frame sent\nI0517 00:21:13.414016 2757 log.go:172] (0xc000a204d0) Data frame received for 5\nI0517 00:21:13.414096 2757 log.go:172] (0xc0003828c0) (5) Data frame handling\nI0517 00:21:13.414158 2757 log.go:172] (0xc000a204d0) Data frame received for 3\nI0517 00:21:13.414180 2757 log.go:172] (0xc000514500) (3) Data frame handling\nI0517 00:21:13.415605 2757 log.go:172] (0xc000a204d0) Data frame received for 1\nI0517 00:21:13.415627 2757 log.go:172] (0xc0003b80a0) (1) Data frame handling\nI0517 00:21:13.415638 2757 log.go:172] (0xc0003b80a0) (1) Data frame sent\nI0517 00:21:13.415687 2757 log.go:172] (0xc000a204d0) (0xc0003b80a0) Stream removed, broadcasting: 1\nI0517 00:21:13.415733 2757 log.go:172] (0xc000a204d0) Go away received\nI0517 00:21:13.416064 2757 log.go:172] (0xc000a204d0) (0xc0003b80a0) Stream removed, broadcasting: 1\nI0517 00:21:13.416095 2757 log.go:172] (0xc000a204d0) (0xc000514500) Stream removed, broadcasting: 3\nI0517 00:21:13.416104 2757 log.go:172] (0xc000a204d0) (0xc0003828c0) Stream removed, broadcasting: 5\n" May 17 00:21:13.420: INFO: stdout: "\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55\naffinity-nodeport-transition-7wg55" May 17 
00:21:13.420: INFO: Received response from host: 
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.420: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.421: INFO: Received response from host: affinity-nodeport-transition-7wg55
May 17 00:21:13.421: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1314, will wait for the garbage collector to delete the pods
May 17 00:21:13.550: INFO: Deleting ReplicationController affinity-nodeport-transition took: 16.262858ms
May 17 00:21:14.050: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.31768ms
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:21:25.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1314" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:24.858 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":98,"skipped":1722,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:21:25.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the 
container status
STEP: the container should be terminated
STEP: the termination message should be set
May 17 00:21:29.879: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:21:29.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9709" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":99,"skipped":1729,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:21:29.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:21:46.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6139" for this suite.
• [SLOW TEST:16.110 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":288,"completed":100,"skipped":1738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:21:46.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 17 00:21:46.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-382e03f2-3757-4704-9b7e-e7694c1f88fd" in namespace "projected-7360" to be "Succeeded or Failed"
May 17 00:21:46.188: INFO: Pod "downwardapi-volume-382e03f2-3757-4704-9b7e-e7694c1f88fd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.844676ms
May 17 00:21:48.193: INFO: Pod "downwardapi-volume-382e03f2-3757-4704-9b7e-e7694c1f88fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017579732s
May 17 00:21:50.197: INFO: Pod "downwardapi-volume-382e03f2-3757-4704-9b7e-e7694c1f88fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021562872s
STEP: Saw pod success
May 17 00:21:50.197: INFO: Pod "downwardapi-volume-382e03f2-3757-4704-9b7e-e7694c1f88fd" satisfied condition "Succeeded or Failed"
May 17 00:21:50.200: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-382e03f2-3757-4704-9b7e-e7694c1f88fd container client-container: 
STEP: delete the pod
May 17 00:21:50.359: INFO: Waiting for pod downwardapi-volume-382e03f2-3757-4704-9b7e-e7694c1f88fd to disappear
May 17 00:21:50.365: INFO: Pod downwardapi-volume-382e03f2-3757-4704-9b7e-e7694c1f88fd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:21:50.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7360" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":101,"skipped":1837,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:21:50.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:22:01.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7898" for this suite.
• [SLOW TEST:11.135 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":102,"skipped":1839,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:22:01.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-2cda8bc7-5187-41ef-892d-2f15552c8e84
STEP: Creating a pod to test consume secrets
May 17 00:22:01.649: INFO: Waiting up to 5m0s for pod "pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254" in namespace "secrets-2825" to be "Succeeded or Failed"
May 17 00:22:01.653: INFO: Pod "pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254": Phase="Pending", Reason="", readiness=false. Elapsed: 3.369658ms
May 17 00:22:03.657: INFO: Pod "pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007712017s
May 17 00:22:05.662: INFO: Pod "pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254": Phase="Running", Reason="", readiness=true. Elapsed: 4.012640299s
May 17 00:22:07.667: INFO: Pod "pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017391318s STEP: Saw pod success May 17 00:22:07.667: INFO: Pod "pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254" satisfied condition "Succeeded or Failed" May 17 00:22:07.670: INFO: Trying to get logs from node latest-worker pod pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254 container secret-volume-test: STEP: delete the pod May 17 00:22:07.751: INFO: Waiting for pod pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254 to disappear May 17 00:22:07.753: INFO: Pod pod-secrets-046e6acb-d6bd-42ed-af63-fca037a45254 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:22:07.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2825" for this suite. • [SLOW TEST:6.248 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1853,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:22:07.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 00:22:07.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5" in namespace "projected-4339" to be "Succeeded or Failed" May 17 00:22:07.892: INFO: Pod "downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.707711ms May 17 00:22:09.896: INFO: Pod "downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010373893s May 17 00:22:11.916: INFO: Pod "downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5": Phase="Running", Reason="", readiness=true. Elapsed: 4.030948702s May 17 00:22:13.920: INFO: Pod "downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.035097812s STEP: Saw pod success May 17 00:22:13.920: INFO: Pod "downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5" satisfied condition "Succeeded or Failed" May 17 00:22:13.924: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5 container client-container: STEP: delete the pod May 17 00:22:14.035: INFO: Waiting for pod downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5 to disappear May 17 00:22:14.042: INFO: Pod downwardapi-volume-750ac8d2-67d6-4273-954c-c73355d4cff5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:22:14.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4339" for this suite. • [SLOW TEST:6.290 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1855,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:22:14.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4851 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4851 I0517 00:22:14.241392 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4851, replica count: 2 I0517 00:22:17.291825 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:22:20.292077 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 00:22:20.292: INFO: Creating new exec pod May 17 00:22:25.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4851 execpoddrp8w -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 17 00:22:25.542: INFO: stderr: "I0517 00:22:25.464204 2777 log.go:172] (0xc000b6f130) (0xc0006755e0) Create stream\nI0517 00:22:25.464429 2777 log.go:172] (0xc000b6f130) (0xc0006755e0) Stream added, broadcasting: 1\nI0517 00:22:25.467109 2777 log.go:172] (0xc000b6f130) Reply frame received for 1\nI0517 00:22:25.467143 2777 log.go:172] (0xc000b6f130) (0xc0003b1a40) Create stream\nI0517 00:22:25.467156 2777 log.go:172] (0xc000b6f130) (0xc0003b1a40) Stream added, broadcasting: 3\nI0517 00:22:25.468039 2777 log.go:172] (0xc000b6f130) Reply frame received for 3\nI0517 00:22:25.468061 2777 log.go:172] (0xc000b6f130) (0xc0006c0000) Create 
stream\nI0517 00:22:25.468078 2777 log.go:172] (0xc000b6f130) (0xc0006c0000) Stream added, broadcasting: 5\nI0517 00:22:25.468957 2777 log.go:172] (0xc000b6f130) Reply frame received for 5\nI0517 00:22:25.533908 2777 log.go:172] (0xc000b6f130) Data frame received for 5\nI0517 00:22:25.533947 2777 log.go:172] (0xc0006c0000) (5) Data frame handling\nI0517 00:22:25.533978 2777 log.go:172] (0xc0006c0000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0517 00:22:25.534453 2777 log.go:172] (0xc000b6f130) Data frame received for 5\nI0517 00:22:25.534477 2777 log.go:172] (0xc0006c0000) (5) Data frame handling\nI0517 00:22:25.534495 2777 log.go:172] (0xc0006c0000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0517 00:22:25.534521 2777 log.go:172] (0xc000b6f130) Data frame received for 3\nI0517 00:22:25.534548 2777 log.go:172] (0xc0003b1a40) (3) Data frame handling\nI0517 00:22:25.534918 2777 log.go:172] (0xc000b6f130) Data frame received for 5\nI0517 00:22:25.534941 2777 log.go:172] (0xc0006c0000) (5) Data frame handling\nI0517 00:22:25.536270 2777 log.go:172] (0xc000b6f130) Data frame received for 1\nI0517 00:22:25.536296 2777 log.go:172] (0xc0006755e0) (1) Data frame handling\nI0517 00:22:25.536317 2777 log.go:172] (0xc0006755e0) (1) Data frame sent\nI0517 00:22:25.536331 2777 log.go:172] (0xc000b6f130) (0xc0006755e0) Stream removed, broadcasting: 1\nI0517 00:22:25.536347 2777 log.go:172] (0xc000b6f130) Go away received\nI0517 00:22:25.536686 2777 log.go:172] (0xc000b6f130) (0xc0006755e0) Stream removed, broadcasting: 1\nI0517 00:22:25.536702 2777 log.go:172] (0xc000b6f130) (0xc0003b1a40) Stream removed, broadcasting: 3\nI0517 00:22:25.536711 2777 log.go:172] (0xc000b6f130) (0xc0006c0000) Stream removed, broadcasting: 5\n" May 17 00:22:25.542: INFO: stdout: "" May 17 00:22:25.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-4851 execpoddrp8w -- /bin/sh -x -c nc -zv -t -w 2 10.107.9.144 80' May 17 00:22:25.737: INFO: stderr: "I0517 00:22:25.674727 2797 log.go:172] (0xc000b771e0) (0xc000a8c500) Create stream\nI0517 00:22:25.674796 2797 log.go:172] (0xc000b771e0) (0xc000a8c500) Stream added, broadcasting: 1\nI0517 00:22:25.679406 2797 log.go:172] (0xc000b771e0) Reply frame received for 1\nI0517 00:22:25.679472 2797 log.go:172] (0xc000b771e0) (0xc0005b0320) Create stream\nI0517 00:22:25.679487 2797 log.go:172] (0xc000b771e0) (0xc0005b0320) Stream added, broadcasting: 3\nI0517 00:22:25.680434 2797 log.go:172] (0xc000b771e0) Reply frame received for 3\nI0517 00:22:25.680469 2797 log.go:172] (0xc000b771e0) (0xc000504e60) Create stream\nI0517 00:22:25.680479 2797 log.go:172] (0xc000b771e0) (0xc000504e60) Stream added, broadcasting: 5\nI0517 00:22:25.681093 2797 log.go:172] (0xc000b771e0) Reply frame received for 5\nI0517 00:22:25.728955 2797 log.go:172] (0xc000b771e0) Data frame received for 5\nI0517 00:22:25.728985 2797 log.go:172] (0xc000504e60) (5) Data frame handling\nI0517 00:22:25.729014 2797 log.go:172] (0xc000504e60) (5) Data frame sent\nI0517 00:22:25.729030 2797 log.go:172] (0xc000b771e0) Data frame received for 5\n+ nc -zv -t -w 2 10.107.9.144 80\nConnection to 10.107.9.144 80 port [tcp/http] succeeded!\nI0517 00:22:25.729041 2797 log.go:172] (0xc000504e60) (5) Data frame handling\nI0517 00:22:25.729336 2797 log.go:172] (0xc000b771e0) Data frame received for 3\nI0517 00:22:25.729379 2797 log.go:172] (0xc0005b0320) (3) Data frame handling\nI0517 00:22:25.730857 2797 log.go:172] (0xc000b771e0) Data frame received for 1\nI0517 00:22:25.730885 2797 log.go:172] (0xc000a8c500) (1) Data frame handling\nI0517 00:22:25.730918 2797 log.go:172] (0xc000a8c500) (1) Data frame sent\nI0517 00:22:25.730943 2797 log.go:172] (0xc000b771e0) (0xc000a8c500) Stream removed, broadcasting: 1\nI0517 00:22:25.730963 2797 log.go:172] (0xc000b771e0) Go away received\nI0517 00:22:25.731260 
2797 log.go:172] (0xc000b771e0) (0xc000a8c500) Stream removed, broadcasting: 1\nI0517 00:22:25.731276 2797 log.go:172] (0xc000b771e0) (0xc0005b0320) Stream removed, broadcasting: 3\nI0517 00:22:25.731283 2797 log.go:172] (0xc000b771e0) (0xc000504e60) Stream removed, broadcasting: 5\n" May 17 00:22:25.737: INFO: stdout: "" May 17 00:22:25.737: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:22:25.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4851" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.735 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":105,"skipped":1862,"failed":0} SSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:22:25.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:22:25.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5443" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":106,"skipped":1866,"failed":0}
S
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:22:25.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6234.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6234.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6234.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 17 00:22:32.176: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:32.180: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:32.184: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:32.187: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:32.195: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:32.198: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:32.201: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:32.204: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:32.210: INFO: Lookups using dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local]
May 17 00:22:37.216: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:37.220: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:37.223: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:37.226: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:37.232: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:37.235: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:37.237: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:37.240: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:37.245: INFO: Lookups using dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local]
May 17 00:22:42.215: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:42.219: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:42.223: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:42.226: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:42.234: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:42.237: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:42.240: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:42.242: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:42.248: INFO: Lookups using dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local]
May 17 00:22:47.215: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:47.218: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:47.221: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:47.224: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:47.231: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:47.234: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:47.237: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:47.239: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:47.245: INFO: Lookups using dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local]
May 17 00:22:52.214: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:52.218: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:52.221: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:52.224: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:52.232: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:52.234: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:52.237: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:52.239: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:52.244: INFO: Lookups using dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local]
May 17 00:22:57.217: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:57.220: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:57.223: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:57.226: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:57.232: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:57.235: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:57.237: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:57.239: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local from pod dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd: the server could not find the requested resource (get pods dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd)
May 17 00:22:57.245: INFO: Lookups using dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6234.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6234.svc.cluster.local jessie_udp@dns-test-service-2.dns-6234.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6234.svc.cluster.local]
May 17 00:23:02.253: INFO: DNS probes using dns-6234/dns-test-42f94476-2c26-4c42-8ea8-8b7dde3c25bd succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:23:02.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6234" for this suite.
• [SLOW TEST:36.948 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":107,"skipped":1867,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:23:02.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-7467
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7467
STEP: Waiting until all
stateful set ss replicas will be running in namespace statefulset-7467 May 17 00:23:03.016: INFO: Found 0 stateful pods, waiting for 1 May 17 00:23:13.034: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 17 00:23:13.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7467 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 17 00:23:13.299: INFO: stderr: "I0517 00:23:13.175883 2818 log.go:172] (0xc000a31550) (0xc0006d75e0) Create stream\nI0517 00:23:13.175938 2818 log.go:172] (0xc000a31550) (0xc0006d75e0) Stream added, broadcasting: 1\nI0517 00:23:13.180789 2818 log.go:172] (0xc000a31550) Reply frame received for 1\nI0517 00:23:13.180857 2818 log.go:172] (0xc000a31550) (0xc000630f00) Create stream\nI0517 00:23:13.180886 2818 log.go:172] (0xc000a31550) (0xc000630f00) Stream added, broadcasting: 3\nI0517 00:23:13.182047 2818 log.go:172] (0xc000a31550) Reply frame received for 3\nI0517 00:23:13.182082 2818 log.go:172] (0xc000a31550) (0xc000560320) Create stream\nI0517 00:23:13.182092 2818 log.go:172] (0xc000a31550) (0xc000560320) Stream added, broadcasting: 5\nI0517 00:23:13.183263 2818 log.go:172] (0xc000a31550) Reply frame received for 5\nI0517 00:23:13.266467 2818 log.go:172] (0xc000a31550) Data frame received for 5\nI0517 00:23:13.266504 2818 log.go:172] (0xc000560320) (5) Data frame handling\nI0517 00:23:13.266536 2818 log.go:172] (0xc000560320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0517 00:23:13.291406 2818 log.go:172] (0xc000a31550) Data frame received for 3\nI0517 00:23:13.291446 2818 log.go:172] (0xc000630f00) (3) Data frame handling\nI0517 00:23:13.291457 2818 log.go:172] (0xc000630f00) (3) Data frame sent\nI0517 00:23:13.291464 2818 log.go:172] (0xc000a31550) Data frame received 
for 3\nI0517 00:23:13.291471 2818 log.go:172] (0xc000630f00) (3) Data frame handling\nI0517 00:23:13.291510 2818 log.go:172] (0xc000a31550) Data frame received for 5\nI0517 00:23:13.291519 2818 log.go:172] (0xc000560320) (5) Data frame handling\nI0517 00:23:13.293812 2818 log.go:172] (0xc000a31550) Data frame received for 1\nI0517 00:23:13.293845 2818 log.go:172] (0xc0006d75e0) (1) Data frame handling\nI0517 00:23:13.293865 2818 log.go:172] (0xc0006d75e0) (1) Data frame sent\nI0517 00:23:13.293891 2818 log.go:172] (0xc000a31550) (0xc0006d75e0) Stream removed, broadcasting: 1\nI0517 00:23:13.294013 2818 log.go:172] (0xc000a31550) Go away received\nI0517 00:23:13.294335 2818 log.go:172] (0xc000a31550) (0xc0006d75e0) Stream removed, broadcasting: 1\nI0517 00:23:13.294356 2818 log.go:172] (0xc000a31550) (0xc000630f00) Stream removed, broadcasting: 3\nI0517 00:23:13.294368 2818 log.go:172] (0xc000a31550) (0xc000560320) Stream removed, broadcasting: 5\n" May 17 00:23:13.300: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 17 00:23:13.300: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 17 00:23:13.303: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 17 00:23:23.307: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 17 00:23:23.307: INFO: Waiting for statefulset status.replicas updated to 0 May 17 00:23:23.319: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999591s May 17 00:23:24.324: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996089512s May 17 00:23:25.329: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991132944s May 17 00:23:26.334: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986196217s May 17 00:23:27.339: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 5.980945284s May 17 00:23:28.344: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.975786013s May 17 00:23:29.349: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970936085s May 17 00:23:30.354: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.966233275s May 17 00:23:31.359: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.961240287s May 17 00:23:32.363: INFO: Verifying statefulset ss doesn't scale past 1 for another 956.505781ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7467 May 17 00:23:33.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7467 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 17 00:23:33.617: INFO: stderr: "I0517 00:23:33.508782 2839 log.go:172] (0xc000b1f760) (0xc000832f00) Create stream\nI0517 00:23:33.508851 2839 log.go:172] (0xc000b1f760) (0xc000832f00) Stream added, broadcasting: 1\nI0517 00:23:33.512789 2839 log.go:172] (0xc000b1f760) Reply frame received for 1\nI0517 00:23:33.512867 2839 log.go:172] (0xc000b1f760) (0xc000827b80) Create stream\nI0517 00:23:33.512888 2839 log.go:172] (0xc000b1f760) (0xc000827b80) Stream added, broadcasting: 3\nI0517 00:23:33.514134 2839 log.go:172] (0xc000b1f760) Reply frame received for 3\nI0517 00:23:33.514166 2839 log.go:172] (0xc000b1f760) (0xc000820c80) Create stream\nI0517 00:23:33.514176 2839 log.go:172] (0xc000b1f760) (0xc000820c80) Stream added, broadcasting: 5\nI0517 00:23:33.514869 2839 log.go:172] (0xc000b1f760) Reply frame received for 5\nI0517 00:23:33.606683 2839 log.go:172] (0xc000b1f760) Data frame received for 5\nI0517 00:23:33.606735 2839 log.go:172] (0xc000820c80) (5) Data frame handling\nI0517 00:23:33.606758 2839 log.go:172] (0xc000820c80) (5) Data frame sent\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0517 00:23:33.606818 2839 log.go:172] (0xc000b1f760) Data frame received for 5\nI0517 00:23:33.606839 2839 log.go:172] (0xc000820c80) (5) Data frame handling\nI0517 00:23:33.606870 2839 log.go:172] (0xc000b1f760) Data frame received for 3\nI0517 00:23:33.606887 2839 log.go:172] (0xc000827b80) (3) Data frame handling\nI0517 00:23:33.606904 2839 log.go:172] (0xc000827b80) (3) Data frame sent\nI0517 00:23:33.606922 2839 log.go:172] (0xc000b1f760) Data frame received for 3\nI0517 00:23:33.606938 2839 log.go:172] (0xc000827b80) (3) Data frame handling\nI0517 00:23:33.610749 2839 log.go:172] (0xc000b1f760) Data frame received for 1\nI0517 00:23:33.610785 2839 log.go:172] (0xc000832f00) (1) Data frame handling\nI0517 00:23:33.610806 2839 log.go:172] (0xc000832f00) (1) Data frame sent\nI0517 00:23:33.610824 2839 log.go:172] (0xc000b1f760) (0xc000832f00) Stream removed, broadcasting: 1\nI0517 00:23:33.610849 2839 log.go:172] (0xc000b1f760) Go away received\nI0517 00:23:33.611198 2839 log.go:172] (0xc000b1f760) (0xc000832f00) Stream removed, broadcasting: 1\nI0517 00:23:33.611238 2839 log.go:172] (0xc000b1f760) (0xc000827b80) Stream removed, broadcasting: 3\nI0517 00:23:33.611251 2839 log.go:172] (0xc000b1f760) (0xc000820c80) Stream removed, broadcasting: 5\n" May 17 00:23:33.617: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 17 00:23:33.617: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 17 00:23:33.620: INFO: Found 1 stateful pods, waiting for 3 May 17 00:23:43.624: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 17 00:23:43.624: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 17 00:23:43.624: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was 
scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 17 00:23:43.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7467 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 17 00:23:43.892: INFO: stderr: "I0517 00:23:43.785091 2860 log.go:172] (0xc000ab7600) (0xc0006ad040) Create stream\nI0517 00:23:43.785334 2860 log.go:172] (0xc000ab7600) (0xc0006ad040) Stream added, broadcasting: 1\nI0517 00:23:43.787418 2860 log.go:172] (0xc000ab7600) Reply frame received for 1\nI0517 00:23:43.787462 2860 log.go:172] (0xc000ab7600) (0xc0004fba40) Create stream\nI0517 00:23:43.787472 2860 log.go:172] (0xc000ab7600) (0xc0004fba40) Stream added, broadcasting: 3\nI0517 00:23:43.788367 2860 log.go:172] (0xc000ab7600) Reply frame received for 3\nI0517 00:23:43.788401 2860 log.go:172] (0xc000ab7600) (0xc0006ad5e0) Create stream\nI0517 00:23:43.788421 2860 log.go:172] (0xc000ab7600) (0xc0006ad5e0) Stream added, broadcasting: 5\nI0517 00:23:43.789453 2860 log.go:172] (0xc000ab7600) Reply frame received for 5\nI0517 00:23:43.886088 2860 log.go:172] (0xc000ab7600) Data frame received for 3\nI0517 00:23:43.886140 2860 log.go:172] (0xc0004fba40) (3) Data frame handling\nI0517 00:23:43.886162 2860 log.go:172] (0xc0004fba40) (3) Data frame sent\nI0517 00:23:43.886184 2860 log.go:172] (0xc000ab7600) Data frame received for 3\nI0517 00:23:43.886199 2860 log.go:172] (0xc0004fba40) (3) Data frame handling\nI0517 00:23:43.886211 2860 log.go:172] (0xc000ab7600) Data frame received for 5\nI0517 00:23:43.886219 2860 log.go:172] (0xc0006ad5e0) (5) Data frame handling\nI0517 00:23:43.886228 2860 log.go:172] (0xc0006ad5e0) (5) Data frame sent\nI0517 00:23:43.886239 2860 log.go:172] (0xc000ab7600) Data frame received for 5\nI0517 00:23:43.886247 2860 log.go:172] (0xc0006ad5e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0517 
00:23:43.887543 2860 log.go:172] (0xc000ab7600) Data frame received for 1\nI0517 00:23:43.887576 2860 log.go:172] (0xc0006ad040) (1) Data frame handling\nI0517 00:23:43.887608 2860 log.go:172] (0xc0006ad040) (1) Data frame sent\nI0517 00:23:43.887629 2860 log.go:172] (0xc000ab7600) (0xc0006ad040) Stream removed, broadcasting: 1\nI0517 00:23:43.887652 2860 log.go:172] (0xc000ab7600) Go away received\nI0517 00:23:43.888162 2860 log.go:172] (0xc000ab7600) (0xc0006ad040) Stream removed, broadcasting: 1\nI0517 00:23:43.888185 2860 log.go:172] (0xc000ab7600) (0xc0004fba40) Stream removed, broadcasting: 3\nI0517 00:23:43.888199 2860 log.go:172] (0xc000ab7600) (0xc0006ad5e0) Stream removed, broadcasting: 5\n" May 17 00:23:43.892: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 17 00:23:43.892: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 17 00:23:43.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7467 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 17 00:23:44.130: INFO: stderr: "I0517 00:23:44.023431 2879 log.go:172] (0xc000a6c000) (0xc00013f0e0) Create stream\nI0517 00:23:44.023495 2879 log.go:172] (0xc000a6c000) (0xc00013f0e0) Stream added, broadcasting: 1\nI0517 00:23:44.028353 2879 log.go:172] (0xc000a6c000) Reply frame received for 1\nI0517 00:23:44.028480 2879 log.go:172] (0xc000a6c000) (0xc000397c20) Create stream\nI0517 00:23:44.028559 2879 log.go:172] (0xc000a6c000) (0xc000397c20) Stream added, broadcasting: 3\nI0517 00:23:44.029914 2879 log.go:172] (0xc000a6c000) Reply frame received for 3\nI0517 00:23:44.030006 2879 log.go:172] (0xc000a6c000) (0xc00013fc20) Create stream\nI0517 00:23:44.030074 2879 log.go:172] (0xc000a6c000) (0xc00013fc20) Stream added, broadcasting: 5\nI0517 00:23:44.031234 2879 
log.go:172] (0xc000a6c000) Reply frame received for 5\nI0517 00:23:44.088535 2879 log.go:172] (0xc000a6c000) Data frame received for 5\nI0517 00:23:44.088564 2879 log.go:172] (0xc00013fc20) (5) Data frame handling\nI0517 00:23:44.088585 2879 log.go:172] (0xc00013fc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0517 00:23:44.122495 2879 log.go:172] (0xc000a6c000) Data frame received for 3\nI0517 00:23:44.122521 2879 log.go:172] (0xc000397c20) (3) Data frame handling\nI0517 00:23:44.122531 2879 log.go:172] (0xc000397c20) (3) Data frame sent\nI0517 00:23:44.122542 2879 log.go:172] (0xc000a6c000) Data frame received for 3\nI0517 00:23:44.122552 2879 log.go:172] (0xc000397c20) (3) Data frame handling\nI0517 00:23:44.122637 2879 log.go:172] (0xc000a6c000) Data frame received for 5\nI0517 00:23:44.122655 2879 log.go:172] (0xc00013fc20) (5) Data frame handling\nI0517 00:23:44.124939 2879 log.go:172] (0xc000a6c000) Data frame received for 1\nI0517 00:23:44.124995 2879 log.go:172] (0xc00013f0e0) (1) Data frame handling\nI0517 00:23:44.125018 2879 log.go:172] (0xc00013f0e0) (1) Data frame sent\nI0517 00:23:44.125029 2879 log.go:172] (0xc000a6c000) (0xc00013f0e0) Stream removed, broadcasting: 1\nI0517 00:23:44.125041 2879 log.go:172] (0xc000a6c000) Go away received\nI0517 00:23:44.125678 2879 log.go:172] (0xc000a6c000) (0xc00013f0e0) Stream removed, broadcasting: 1\nI0517 00:23:44.125693 2879 log.go:172] (0xc000a6c000) (0xc000397c20) Stream removed, broadcasting: 3\nI0517 00:23:44.125698 2879 log.go:172] (0xc000a6c000) (0xc00013fc20) Stream removed, broadcasting: 5\n" May 17 00:23:44.130: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 17 00:23:44.130: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 17 00:23:44.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7467 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 17 00:23:44.364: INFO: stderr: "I0517 00:23:44.249263 2902 log.go:172] (0xc000ba9290) (0xc000445860) Create stream\nI0517 00:23:44.249310 2902 log.go:172] (0xc000ba9290) (0xc000445860) Stream added, broadcasting: 1\nI0517 00:23:44.250834 2902 log.go:172] (0xc000ba9290) Reply frame received for 1\nI0517 00:23:44.250866 2902 log.go:172] (0xc000ba9290) (0xc000503040) Create stream\nI0517 00:23:44.250881 2902 log.go:172] (0xc000ba9290) (0xc000503040) Stream added, broadcasting: 3\nI0517 00:23:44.251698 2902 log.go:172] (0xc000ba9290) Reply frame received for 3\nI0517 00:23:44.251734 2902 log.go:172] (0xc000ba9290) (0xc00050c960) Create stream\nI0517 00:23:44.251751 2902 log.go:172] (0xc000ba9290) (0xc00050c960) Stream added, broadcasting: 5\nI0517 00:23:44.252684 2902 log.go:172] (0xc000ba9290) Reply frame received for 5\nI0517 00:23:44.326380 2902 log.go:172] (0xc000ba9290) Data frame received for 5\nI0517 00:23:44.326409 2902 log.go:172] (0xc00050c960) (5) Data frame handling\nI0517 00:23:44.326429 2902 log.go:172] (0xc00050c960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0517 00:23:44.356386 2902 log.go:172] (0xc000ba9290) Data frame received for 3\nI0517 00:23:44.356432 2902 log.go:172] (0xc000503040) (3) Data frame handling\nI0517 00:23:44.356452 2902 log.go:172] (0xc000503040) (3) Data frame sent\nI0517 00:23:44.356799 2902 log.go:172] (0xc000ba9290) Data frame received for 5\nI0517 00:23:44.356864 2902 log.go:172] (0xc00050c960) (5) Data frame handling\nI0517 00:23:44.357370 2902 log.go:172] (0xc000ba9290) Data frame received for 3\nI0517 00:23:44.357383 2902 log.go:172] (0xc000503040) (3) Data frame handling\nI0517 00:23:44.358946 2902 log.go:172] (0xc000ba9290) Data frame received for 1\nI0517 00:23:44.358972 2902 log.go:172] (0xc000445860) (1) Data frame handling\nI0517 
00:23:44.358999 2902 log.go:172] (0xc000445860) (1) Data frame sent\nI0517 00:23:44.359019 2902 log.go:172] (0xc000ba9290) (0xc000445860) Stream removed, broadcasting: 1\nI0517 00:23:44.359406 2902 log.go:172] (0xc000ba9290) Go away received\nI0517 00:23:44.359482 2902 log.go:172] (0xc000ba9290) (0xc000445860) Stream removed, broadcasting: 1\nI0517 00:23:44.359517 2902 log.go:172] (0xc000ba9290) (0xc000503040) Stream removed, broadcasting: 3\nI0517 00:23:44.359532 2902 log.go:172] (0xc000ba9290) (0xc00050c960) Stream removed, broadcasting: 5\n" May 17 00:23:44.365: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 17 00:23:44.365: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 17 00:23:44.365: INFO: Waiting for statefulset status.replicas updated to 0 May 17 00:23:44.370: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 17 00:23:54.376: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 17 00:23:54.376: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 17 00:23:54.376: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 17 00:23:54.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999741s May 17 00:23:55.393: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993288823s May 17 00:23:56.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988050192s May 17 00:23:57.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982670433s May 17 00:23:58.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97820218s May 17 00:23:59.417: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970171397s May 17 00:24:00.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 
3.963998043s May 17 00:24:01.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958293309s May 17 00:24:02.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.952145459s May 17 00:24:03.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 947.065387ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7467 May 17 00:24:04.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7467 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 17 00:24:04.691: INFO: stderr: "I0517 00:24:04.595356 2922 log.go:172] (0xc00003a8f0) (0xc0005559a0) Create stream\nI0517 00:24:04.595446 2922 log.go:172] (0xc00003a8f0) (0xc0005559a0) Stream added, broadcasting: 1\nI0517 00:24:04.596813 2922 log.go:172] (0xc00003a8f0) Reply frame received for 1\nI0517 00:24:04.596833 2922 log.go:172] (0xc00003a8f0) (0xc0009a1a40) Create stream\nI0517 00:24:04.596840 2922 log.go:172] (0xc00003a8f0) (0xc0009a1a40) Stream added, broadcasting: 3\nI0517 00:24:04.598000 2922 log.go:172] (0xc00003a8f0) Reply frame received for 3\nI0517 00:24:04.598023 2922 log.go:172] (0xc00003a8f0) (0xc00024e820) Create stream\nI0517 00:24:04.598029 2922 log.go:172] (0xc00003a8f0) (0xc00024e820) Stream added, broadcasting: 5\nI0517 00:24:04.598895 2922 log.go:172] (0xc00003a8f0) Reply frame received for 5\nI0517 00:24:04.684165 2922 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0517 00:24:04.684235 2922 log.go:172] (0xc00024e820) (5) Data frame handling\nI0517 00:24:04.684252 2922 log.go:172] (0xc00024e820) (5) Data frame sent\nI0517 00:24:04.684266 2922 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0517 00:24:04.684294 2922 log.go:172] (0xc00024e820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0517 00:24:04.684352 2922 log.go:172] 
(0xc00003a8f0) Data frame received for 3\nI0517 00:24:04.684392 2922 log.go:172] (0xc0009a1a40) (3) Data frame handling\nI0517 00:24:04.684420 2922 log.go:172] (0xc0009a1a40) (3) Data frame sent\nI0517 00:24:04.684449 2922 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0517 00:24:04.684465 2922 log.go:172] (0xc0009a1a40) (3) Data frame handling\nI0517 00:24:04.686013 2922 log.go:172] (0xc00003a8f0) Data frame received for 1\nI0517 00:24:04.686043 2922 log.go:172] (0xc0005559a0) (1) Data frame handling\nI0517 00:24:04.686058 2922 log.go:172] (0xc0005559a0) (1) Data frame sent\nI0517 00:24:04.686078 2922 log.go:172] (0xc00003a8f0) (0xc0005559a0) Stream removed, broadcasting: 1\nI0517 00:24:04.686127 2922 log.go:172] (0xc00003a8f0) Go away received\nI0517 00:24:04.686466 2922 log.go:172] (0xc00003a8f0) (0xc0005559a0) Stream removed, broadcasting: 1\nI0517 00:24:04.686497 2922 log.go:172] (0xc00003a8f0) (0xc0009a1a40) Stream removed, broadcasting: 3\nI0517 00:24:04.686507 2922 log.go:172] (0xc00003a8f0) (0xc00024e820) Stream removed, broadcasting: 5\n" May 17 00:24:04.691: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 17 00:24:04.691: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 17 00:24:04.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7467 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 17 00:24:04.908: INFO: stderr: "I0517 00:24:04.823230 2940 log.go:172] (0xc00092b130) (0xc000b2a320) Create stream\nI0517 00:24:04.823280 2940 log.go:172] (0xc00092b130) (0xc000b2a320) Stream added, broadcasting: 1\nI0517 00:24:04.826708 2940 log.go:172] (0xc00092b130) Reply frame received for 1\nI0517 00:24:04.826739 2940 log.go:172] (0xc00092b130) (0xc00082fc20) Create stream\nI0517 00:24:04.826756 2940 
log.go:172] (0xc00092b130) (0xc00082fc20) Stream added, broadcasting: 3\nI0517 00:24:04.827570 2940 log.go:172] (0xc00092b130) Reply frame received for 3\nI0517 00:24:04.827659 2940 log.go:172] (0xc00092b130) (0xc000b280a0) Create stream\nI0517 00:24:04.827695 2940 log.go:172] (0xc00092b130) (0xc000b280a0) Stream added, broadcasting: 5\nI0517 00:24:04.828940 2940 log.go:172] (0xc00092b130) Reply frame received for 5\nI0517 00:24:04.902134 2940 log.go:172] (0xc00092b130) Data frame received for 5\nI0517 00:24:04.902162 2940 log.go:172] (0xc000b280a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0517 00:24:04.902184 2940 log.go:172] (0xc00092b130) Data frame received for 3\nI0517 00:24:04.902209 2940 log.go:172] (0xc00082fc20) (3) Data frame handling\nI0517 00:24:04.902231 2940 log.go:172] (0xc00082fc20) (3) Data frame sent\nI0517 00:24:04.902242 2940 log.go:172] (0xc00092b130) Data frame received for 3\nI0517 00:24:04.902259 2940 log.go:172] (0xc00082fc20) (3) Data frame handling\nI0517 00:24:04.902290 2940 log.go:172] (0xc000b280a0) (5) Data frame sent\nI0517 00:24:04.902308 2940 log.go:172] (0xc00092b130) Data frame received for 5\nI0517 00:24:04.902317 2940 log.go:172] (0xc000b280a0) (5) Data frame handling\nI0517 00:24:04.903873 2940 log.go:172] (0xc00092b130) Data frame received for 1\nI0517 00:24:04.903972 2940 log.go:172] (0xc000b2a320) (1) Data frame handling\nI0517 00:24:04.904010 2940 log.go:172] (0xc000b2a320) (1) Data frame sent\nI0517 00:24:04.904027 2940 log.go:172] (0xc00092b130) (0xc000b2a320) Stream removed, broadcasting: 1\nI0517 00:24:04.904221 2940 log.go:172] (0xc00092b130) Go away received\nI0517 00:24:04.904248 2940 log.go:172] (0xc00092b130) (0xc000b2a320) Stream removed, broadcasting: 1\nI0517 00:24:04.904260 2940 log.go:172] (0xc00092b130) (0xc00082fc20) Stream removed, broadcasting: 3\nI0517 00:24:04.904272 2940 log.go:172] (0xc00092b130) (0xc000b280a0) Stream removed, broadcasting: 5\n" May 17 
00:24:04.908: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 17 00:24:04.908: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 17 00:24:04.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7467 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 17 00:24:05.132: INFO: stderr: "I0517 00:24:05.048710 2960 log.go:172] (0xc0009d0f20) (0xc0006ccfa0) Create stream\nI0517 00:24:05.048777 2960 log.go:172] (0xc0009d0f20) (0xc0006ccfa0) Stream added, broadcasting: 1\nI0517 00:24:05.052077 2960 log.go:172] (0xc0009d0f20) Reply frame received for 1\nI0517 00:24:05.052122 2960 log.go:172] (0xc0009d0f20) (0xc000558820) Create stream\nI0517 00:24:05.052154 2960 log.go:172] (0xc0009d0f20) (0xc000558820) Stream added, broadcasting: 3\nI0517 00:24:05.053487 2960 log.go:172] (0xc0009d0f20) Reply frame received for 3\nI0517 00:24:05.053535 2960 log.go:172] (0xc0009d0f20) (0xc000419c20) Create stream\nI0517 00:24:05.053563 2960 log.go:172] (0xc0009d0f20) (0xc000419c20) Stream added, broadcasting: 5\nI0517 00:24:05.054940 2960 log.go:172] (0xc0009d0f20) Reply frame received for 5\nI0517 00:24:05.123973 2960 log.go:172] (0xc0009d0f20) Data frame received for 5\nI0517 00:24:05.124022 2960 log.go:172] (0xc000419c20) (5) Data frame handling\nI0517 00:24:05.124038 2960 log.go:172] (0xc000419c20) (5) Data frame sent\nI0517 00:24:05.124049 2960 log.go:172] (0xc0009d0f20) Data frame received for 5\nI0517 00:24:05.124071 2960 log.go:172] (0xc000419c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0517 00:24:05.124108 2960 log.go:172] (0xc0009d0f20) Data frame received for 3\nI0517 00:24:05.124138 2960 log.go:172] (0xc000558820) (3) Data frame handling\nI0517 00:24:05.124167 2960 log.go:172] (0xc000558820) (3) Data 
frame sent\nI0517 00:24:05.124185 2960 log.go:172] (0xc0009d0f20) Data frame received for 3\nI0517 00:24:05.124197 2960 log.go:172] (0xc000558820) (3) Data frame handling\nI0517 00:24:05.125744 2960 log.go:172] (0xc0009d0f20) Data frame received for 1\nI0517 00:24:05.125762 2960 log.go:172] (0xc0006ccfa0) (1) Data frame handling\nI0517 00:24:05.125772 2960 log.go:172] (0xc0006ccfa0) (1) Data frame sent\nI0517 00:24:05.125787 2960 log.go:172] (0xc0009d0f20) (0xc0006ccfa0) Stream removed, broadcasting: 1\nI0517 00:24:05.125813 2960 log.go:172] (0xc0009d0f20) Go away received\nI0517 00:24:05.126251 2960 log.go:172] (0xc0009d0f20) (0xc0006ccfa0) Stream removed, broadcasting: 1\nI0517 00:24:05.126281 2960 log.go:172] (0xc0009d0f20) (0xc000558820) Stream removed, broadcasting: 3\nI0517 00:24:05.126306 2960 log.go:172] (0xc0009d0f20) (0xc000419c20) Stream removed, broadcasting: 5\n" May 17 00:24:05.132: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 17 00:24:05.132: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 17 00:24:05.132: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 17 00:24:35.160: INFO: Deleting all statefulset in ns statefulset-7467 May 17 00:24:35.163: INFO: Scaling statefulset ss to 0 May 17 00:24:35.170: INFO: Waiting for statefulset status.replicas updated to 0 May 17 00:24:35.172: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:24:35.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7467" for this suite. 
• [SLOW TEST:92.300 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":108,"skipped":1868,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:24:35.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-8690 STEP: creating replication controller nodeport-test in namespace services-8690 I0517 00:24:35.418335 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8690, 
replica count: 2 I0517 00:24:38.468787 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:24:41.469066 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 00:24:41.469: INFO: Creating new exec pod May 17 00:24:46.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8690 execpodnpgg9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 17 00:24:46.771: INFO: stderr: "I0517 00:24:46.666314 2981 log.go:172] (0xc000b691e0) (0xc000752dc0) Create stream\nI0517 00:24:46.666377 2981 log.go:172] (0xc000b691e0) (0xc000752dc0) Stream added, broadcasting: 1\nI0517 00:24:46.671124 2981 log.go:172] (0xc000b691e0) Reply frame received for 1\nI0517 00:24:46.671168 2981 log.go:172] (0xc000b691e0) (0xc000717a40) Create stream\nI0517 00:24:46.671179 2981 log.go:172] (0xc000b691e0) (0xc000717a40) Stream added, broadcasting: 3\nI0517 00:24:46.672164 2981 log.go:172] (0xc000b691e0) Reply frame received for 3\nI0517 00:24:46.672215 2981 log.go:172] (0xc000b691e0) (0xc0006d0b40) Create stream\nI0517 00:24:46.672226 2981 log.go:172] (0xc000b691e0) (0xc0006d0b40) Stream added, broadcasting: 5\nI0517 00:24:46.673283 2981 log.go:172] (0xc000b691e0) Reply frame received for 5\nI0517 00:24:46.762844 2981 log.go:172] (0xc000b691e0) Data frame received for 5\nI0517 00:24:46.762887 2981 log.go:172] (0xc0006d0b40) (5) Data frame handling\nI0517 00:24:46.762933 2981 log.go:172] (0xc0006d0b40) (5) Data frame sent\nI0517 00:24:46.762956 2981 log.go:172] (0xc000b691e0) Data frame received for 5\nI0517 00:24:46.762974 2981 log.go:172] (0xc0006d0b40) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0517 00:24:46.763695 2981 log.go:172] 
(0xc000b691e0) Data frame received for 3\nI0517 00:24:46.763725 2981 log.go:172] (0xc000717a40) (3) Data frame handling\nI0517 00:24:46.764796 2981 log.go:172] (0xc000b691e0) Data frame received for 1\nI0517 00:24:46.764856 2981 log.go:172] (0xc000752dc0) (1) Data frame handling\nI0517 00:24:46.764874 2981 log.go:172] (0xc000752dc0) (1) Data frame sent\nI0517 00:24:46.765158 2981 log.go:172] (0xc000b691e0) (0xc000752dc0) Stream removed, broadcasting: 1\nI0517 00:24:46.765580 2981 log.go:172] (0xc000b691e0) (0xc000752dc0) Stream removed, broadcasting: 1\nI0517 00:24:46.765604 2981 log.go:172] (0xc000b691e0) (0xc000717a40) Stream removed, broadcasting: 3\nI0517 00:24:46.765784 2981 log.go:172] (0xc000b691e0) (0xc0006d0b40) Stream removed, broadcasting: 5\nI0517 00:24:46.765827 2981 log.go:172] (0xc000b691e0) Go away received\n" May 17 00:24:46.772: INFO: stdout: "" May 17 00:24:46.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8690 execpodnpgg9 -- /bin/sh -x -c nc -zv -t -w 2 10.98.29.249 80' May 17 00:24:46.988: INFO: stderr: "I0517 00:24:46.898581 3003 log.go:172] (0xc00068ebb0) (0xc0005c7d60) Create stream\nI0517 00:24:46.898637 3003 log.go:172] (0xc00068ebb0) (0xc0005c7d60) Stream added, broadcasting: 1\nI0517 00:24:46.910496 3003 log.go:172] (0xc00068ebb0) Reply frame received for 1\nI0517 00:24:46.910554 3003 log.go:172] (0xc00068ebb0) (0xc0001f8f00) Create stream\nI0517 00:24:46.910572 3003 log.go:172] (0xc00068ebb0) (0xc0001f8f00) Stream added, broadcasting: 3\nI0517 00:24:46.911354 3003 log.go:172] (0xc00068ebb0) Reply frame received for 3\nI0517 00:24:46.911382 3003 log.go:172] (0xc00068ebb0) (0xc0003270e0) Create stream\nI0517 00:24:46.911393 3003 log.go:172] (0xc00068ebb0) (0xc0003270e0) Stream added, broadcasting: 5\nI0517 00:24:46.912034 3003 log.go:172] (0xc00068ebb0) Reply frame received for 5\nI0517 00:24:46.982057 3003 log.go:172] (0xc00068ebb0) Data frame 
received for 3\nI0517 00:24:46.982084 3003 log.go:172] (0xc0001f8f00) (3) Data frame handling\nI0517 00:24:46.982104 3003 log.go:172] (0xc00068ebb0) Data frame received for 5\nI0517 00:24:46.982111 3003 log.go:172] (0xc0003270e0) (5) Data frame handling\nI0517 00:24:46.982119 3003 log.go:172] (0xc0003270e0) (5) Data frame sent\nI0517 00:24:46.982126 3003 log.go:172] (0xc00068ebb0) Data frame received for 5\nI0517 00:24:46.982131 3003 log.go:172] (0xc0003270e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.29.249 80\nConnection to 10.98.29.249 80 port [tcp/http] succeeded!\nI0517 00:24:46.983167 3003 log.go:172] (0xc00068ebb0) Data frame received for 1\nI0517 00:24:46.983236 3003 log.go:172] (0xc0005c7d60) (1) Data frame handling\nI0517 00:24:46.983276 3003 log.go:172] (0xc0005c7d60) (1) Data frame sent\nI0517 00:24:46.983324 3003 log.go:172] (0xc00068ebb0) (0xc0005c7d60) Stream removed, broadcasting: 1\nI0517 00:24:46.983365 3003 log.go:172] (0xc00068ebb0) Go away received\nI0517 00:24:46.983727 3003 log.go:172] (0xc00068ebb0) (0xc0005c7d60) Stream removed, broadcasting: 1\nI0517 00:24:46.983755 3003 log.go:172] (0xc00068ebb0) (0xc0001f8f00) Stream removed, broadcasting: 3\nI0517 00:24:46.983770 3003 log.go:172] (0xc00068ebb0) (0xc0003270e0) Stream removed, broadcasting: 5\n" May 17 00:24:46.988: INFO: stdout: "" May 17 00:24:46.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8690 execpodnpgg9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31764' May 17 00:24:47.199: INFO: stderr: "I0517 00:24:47.127618 3024 log.go:172] (0xc00098a160) (0xc0005435e0) Create stream\nI0517 00:24:47.127674 3024 log.go:172] (0xc00098a160) (0xc0005435e0) Stream added, broadcasting: 1\nI0517 00:24:47.130112 3024 log.go:172] (0xc00098a160) Reply frame received for 1\nI0517 00:24:47.130170 3024 log.go:172] (0xc00098a160) (0xc0004fae60) Create stream\nI0517 00:24:47.130190 3024 log.go:172] (0xc00098a160) 
(0xc0004fae60) Stream added, broadcasting: 3\nI0517 00:24:47.131520 3024 log.go:172] (0xc00098a160) Reply frame received for 3\nI0517 00:24:47.131563 3024 log.go:172] (0xc00098a160) (0xc000254140) Create stream\nI0517 00:24:47.131579 3024 log.go:172] (0xc00098a160) (0xc000254140) Stream added, broadcasting: 5\nI0517 00:24:47.132724 3024 log.go:172] (0xc00098a160) Reply frame received for 5\nI0517 00:24:47.192650 3024 log.go:172] (0xc00098a160) Data frame received for 3\nI0517 00:24:47.192692 3024 log.go:172] (0xc0004fae60) (3) Data frame handling\nI0517 00:24:47.192722 3024 log.go:172] (0xc00098a160) Data frame received for 5\nI0517 00:24:47.192735 3024 log.go:172] (0xc000254140) (5) Data frame handling\nI0517 00:24:47.192748 3024 log.go:172] (0xc000254140) (5) Data frame sent\nI0517 00:24:47.192758 3024 log.go:172] (0xc00098a160) Data frame received for 5\nI0517 00:24:47.192776 3024 log.go:172] (0xc000254140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31764\nConnection to 172.17.0.13 31764 port [tcp/31764] succeeded!\nI0517 00:24:47.194283 3024 log.go:172] (0xc00098a160) Data frame received for 1\nI0517 00:24:47.194333 3024 log.go:172] (0xc0005435e0) (1) Data frame handling\nI0517 00:24:47.194353 3024 log.go:172] (0xc0005435e0) (1) Data frame sent\nI0517 00:24:47.194372 3024 log.go:172] (0xc00098a160) (0xc0005435e0) Stream removed, broadcasting: 1\nI0517 00:24:47.194404 3024 log.go:172] (0xc00098a160) Go away received\nI0517 00:24:47.194803 3024 log.go:172] (0xc00098a160) (0xc0005435e0) Stream removed, broadcasting: 1\nI0517 00:24:47.194822 3024 log.go:172] (0xc00098a160) (0xc0004fae60) Stream removed, broadcasting: 3\nI0517 00:24:47.194836 3024 log.go:172] (0xc00098a160) (0xc000254140) Stream removed, broadcasting: 5\n" May 17 00:24:47.199: INFO: stdout: "" May 17 00:24:47.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8690 execpodnpgg9 -- /bin/sh -x -c nc -zv -t 
-w 2 172.17.0.12 31764' May 17 00:24:47.524: INFO: stderr: "I0517 00:24:47.336695 3042 log.go:172] (0xc00003a160) (0xc0009a7c20) Create stream\nI0517 00:24:47.336781 3042 log.go:172] (0xc00003a160) (0xc0009a7c20) Stream added, broadcasting: 1\nI0517 00:24:47.339381 3042 log.go:172] (0xc00003a160) Reply frame received for 1\nI0517 00:24:47.339415 3042 log.go:172] (0xc00003a160) (0xc00098e140) Create stream\nI0517 00:24:47.339427 3042 log.go:172] (0xc00003a160) (0xc00098e140) Stream added, broadcasting: 3\nI0517 00:24:47.340146 3042 log.go:172] (0xc00003a160) Reply frame received for 3\nI0517 00:24:47.340170 3042 log.go:172] (0xc00003a160) (0xc000982780) Create stream\nI0517 00:24:47.340182 3042 log.go:172] (0xc00003a160) (0xc000982780) Stream added, broadcasting: 5\nI0517 00:24:47.340978 3042 log.go:172] (0xc00003a160) Reply frame received for 5\nI0517 00:24:47.519166 3042 log.go:172] (0xc00003a160) Data frame received for 3\nI0517 00:24:47.519188 3042 log.go:172] (0xc00098e140) (3) Data frame handling\nI0517 00:24:47.519398 3042 log.go:172] (0xc00003a160) Data frame received for 5\nI0517 00:24:47.519408 3042 log.go:172] (0xc000982780) (5) Data frame handling\nI0517 00:24:47.519425 3042 log.go:172] (0xc000982780) (5) Data frame sent\nI0517 00:24:47.519436 3042 log.go:172] (0xc00003a160) Data frame received for 5\nI0517 00:24:47.519441 3042 log.go:172] (0xc000982780) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31764\nConnection to 172.17.0.12 31764 port [tcp/31764] succeeded!\nI0517 00:24:47.520945 3042 log.go:172] (0xc00003a160) Data frame received for 1\nI0517 00:24:47.520961 3042 log.go:172] (0xc0009a7c20) (1) Data frame handling\nI0517 00:24:47.520966 3042 log.go:172] (0xc0009a7c20) (1) Data frame sent\nI0517 00:24:47.520972 3042 log.go:172] (0xc00003a160) (0xc0009a7c20) Stream removed, broadcasting: 1\nI0517 00:24:47.521003 3042 log.go:172] (0xc00003a160) Go away received\nI0517 00:24:47.521358 3042 log.go:172] (0xc00003a160) (0xc0009a7c20) Stream 
removed, broadcasting: 1\nI0517 00:24:47.521368 3042 log.go:172] (0xc00003a160) (0xc00098e140) Stream removed, broadcasting: 3\nI0517 00:24:47.521373 3042 log.go:172] (0xc00003a160) (0xc000982780) Stream removed, broadcasting: 5\n" May 17 00:24:47.524: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:24:47.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8690" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.347 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":109,"skipped":1897,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:24:47.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the 
rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0517 00:25:28.460272 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 17 00:25:28.460: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:25:28.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5943" for this suite. 
• [SLOW TEST:40.932 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":110,"skipped":1910,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:25:28.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:25:29.102: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:25:31.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725271929, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271929, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271929, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271929, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:25:34.171: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:25:38.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1112" for this suite. STEP: Destroying namespace "webhook-1112-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.369 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":111,"skipped":1917,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:25:38.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:25:40.250: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:25:42.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271940, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271940, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271940, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271940, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 00:25:44.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271940, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271940, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271940, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725271940, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:25:47.322: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:25:47.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4440" for this suite. STEP: Destroying namespace "webhook-4440-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.782 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":112,"skipped":1947,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:25:47.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 17 00:25:47.800: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:25:55.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4502" for this suite. 
• [SLOW TEST:8.040 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":1952,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:25:55.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:25:55.740: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 17 00:25:58.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6270 create -f -' May 17 00:26:01.929: INFO: stderr: "" May 17 00:26:01.929: INFO: stdout: "e2e-test-crd-publish-openapi-1206-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 17 00:26:01.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6270 delete e2e-test-crd-publish-openapi-1206-crds test-cr' May 17 00:26:02.036: INFO: stderr: "" May 17 00:26:02.036: INFO: stdout: "e2e-test-crd-publish-openapi-1206-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 17 00:26:02.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6270 apply -f -' May 17 00:26:02.296: INFO: stderr: "" May 17 00:26:02.296: INFO: stdout: "e2e-test-crd-publish-openapi-1206-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 17 00:26:02.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6270 delete e2e-test-crd-publish-openapi-1206-crds test-cr' May 17 00:26:02.424: INFO: stderr: "" May 17 00:26:02.424: INFO: stdout: "e2e-test-crd-publish-openapi-1206-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 17 00:26:02.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1206-crds' May 17 00:26:02.695: INFO: stderr: "" May 17 00:26:02.695: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1206-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:26:05.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6270" for this suite. 
• [SLOW TEST:9.973 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":114,"skipped":1958,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:26:05.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 17 00:26:09.780: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:26:09.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9131" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":1973,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:26:09.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2614 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2614 STEP: Creating statefulset with conflicting port in namespace statefulset-2614 STEP: Waiting until pod test-pod will start running in namespace 
statefulset-2614 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2614 May 17 00:26:16.305: INFO: Observed stateful pod in namespace: statefulset-2614, name: ss-0, uid: b4964789-cd7d-4ba8-907a-9cddba086864, status phase: Pending. Waiting for statefulset controller to delete. May 17 00:26:16.454: INFO: Observed stateful pod in namespace: statefulset-2614, name: ss-0, uid: b4964789-cd7d-4ba8-907a-9cddba086864, status phase: Failed. Waiting for statefulset controller to delete. May 17 00:26:16.473: INFO: Observed stateful pod in namespace: statefulset-2614, name: ss-0, uid: b4964789-cd7d-4ba8-907a-9cddba086864, status phase: Failed. Waiting for statefulset controller to delete. May 17 00:26:16.479: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2614 STEP: Removing pod with conflicting port in namespace statefulset-2614 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2614 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 17 00:26:22.581: INFO: Deleting all statefulset in ns statefulset-2614 May 17 00:26:22.584: INFO: Scaling statefulset ss to 0 May 17 00:26:42.623: INFO: Waiting for statefulset status.replicas updated to 0 May 17 00:26:42.626: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:26:42.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2614" for this suite. 
• [SLOW TEST:32.799 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":116,"skipped":2014,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:26:42.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container
'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:27:13.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9749" for this suite.
• [SLOW TEST:31.090 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":117,"skipped":2015,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:27:13.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 17 00:27:13.828: INFO: Waiting up to 5m0s for pod "downward-api-acea6b38-339e-4b97-b853-581ed95bc42a" in namespace "downward-api-6254" to be "Succeeded or Failed" May 17 00:27:13.838: INFO: Pod "downward-api-acea6b38-339e-4b97-b853-581ed95bc42a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064782ms May 17 00:27:15.843: INFO: Pod "downward-api-acea6b38-339e-4b97-b853-581ed95bc42a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014885763s May 17 00:27:17.874: INFO: Pod "downward-api-acea6b38-339e-4b97-b853-581ed95bc42a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045519225s STEP: Saw pod success May 17 00:27:17.874: INFO: Pod "downward-api-acea6b38-339e-4b97-b853-581ed95bc42a" satisfied condition "Succeeded or Failed" May 17 00:27:17.877: INFO: Trying to get logs from node latest-worker pod downward-api-acea6b38-339e-4b97-b853-581ed95bc42a container dapi-container: STEP: delete the pod May 17 00:27:17.929: INFO: Waiting for pod downward-api-acea6b38-339e-4b97-b853-581ed95bc42a to disappear May 17 00:27:17.957: INFO: Pod downward-api-acea6b38-339e-4b97-b853-581ed95bc42a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:27:17.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6254" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":118,"skipped":2033,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:27:17.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:27:18.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-413" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":119,"skipped":2041,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:27:18.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 17 00:27:18.529: INFO: Waiting up to 5m0s for pod "pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a" in namespace "emptydir-8342" to be "Succeeded or Failed" May 17 00:27:18.534: INFO: Pod "pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.178685ms May 17 00:27:20.536: INFO: Pod "pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006915718s May 17 00:27:22.540: INFO: Pod "pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a": Phase="Running", Reason="", readiness=true. Elapsed: 4.010312589s May 17 00:27:24.544: INFO: Pod "pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014232031s STEP: Saw pod success May 17 00:27:24.544: INFO: Pod "pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a" satisfied condition "Succeeded or Failed" May 17 00:27:24.546: INFO: Trying to get logs from node latest-worker2 pod pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a container test-container: STEP: delete the pod May 17 00:27:24.660: INFO: Waiting for pod pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a to disappear May 17 00:27:24.666: INFO: Pod pod-a926cf5f-fd04-4c5f-9215-6e26b976aa2a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:27:24.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8342" for this suite. 
• [SLOW TEST:6.242 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":2075,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:27:24.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-e8e7bf03-eb7e-40d1-9176-dacbedd1da1c
STEP: Creating a pod to test consume configMaps
May 17 00:27:24.740: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-954b9eba-815d-4094-9cb9-745195a8134d" in namespace "projected-5445" to be "Succeeded or Failed"
May 17 00:27:24.743: INFO: Pod "pod-projected-configmaps-954b9eba-815d-4094-9cb9-745195a8134d": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.03763ms May 17 00:27:26.749: INFO: Pod "pod-projected-configmaps-954b9eba-815d-4094-9cb9-745195a8134d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009091471s May 17 00:27:28.752: INFO: Pod "pod-projected-configmaps-954b9eba-815d-4094-9cb9-745195a8134d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012184462s STEP: Saw pod success May 17 00:27:28.752: INFO: Pod "pod-projected-configmaps-954b9eba-815d-4094-9cb9-745195a8134d" satisfied condition "Succeeded or Failed" May 17 00:27:28.755: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-954b9eba-815d-4094-9cb9-745195a8134d container projected-configmap-volume-test: STEP: delete the pod May 17 00:27:28.802: INFO: Waiting for pod pod-projected-configmaps-954b9eba-815d-4094-9cb9-745195a8134d to disappear May 17 00:27:28.819: INFO: Pod pod-projected-configmaps-954b9eba-815d-4094-9cb9-745195a8134d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:27:28.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5445" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":2084,"failed":0} SSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:27:28.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 17 00:27:39.060: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.060: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:39.097908 7 log.go:172] (0xc0024f2c60) (0xc002184d20) Create stream I0517 00:27:39.097935 7 log.go:172] (0xc0024f2c60) (0xc002184d20) Stream added, broadcasting: 1 I0517 00:27:39.100105 7 log.go:172] (0xc0024f2c60) Reply frame received for 1 I0517 00:27:39.100147 7 log.go:172] (0xc0024f2c60) (0xc0012a8fa0) Create stream I0517 00:27:39.100170 7 log.go:172] (0xc0024f2c60) (0xc0012a8fa0) Stream added, broadcasting: 3 I0517 00:27:39.101529 7 log.go:172] (0xc0024f2c60) 
Reply frame received for 3 I0517 00:27:39.101569 7 log.go:172] (0xc0024f2c60) (0xc001f76000) Create stream I0517 00:27:39.101581 7 log.go:172] (0xc0024f2c60) (0xc001f76000) Stream added, broadcasting: 5 I0517 00:27:39.102733 7 log.go:172] (0xc0024f2c60) Reply frame received for 5 I0517 00:27:39.179490 7 log.go:172] (0xc0024f2c60) Data frame received for 5 I0517 00:27:39.179512 7 log.go:172] (0xc001f76000) (5) Data frame handling I0517 00:27:39.179538 7 log.go:172] (0xc0024f2c60) Data frame received for 3 I0517 00:27:39.179569 7 log.go:172] (0xc0012a8fa0) (3) Data frame handling I0517 00:27:39.179607 7 log.go:172] (0xc0012a8fa0) (3) Data frame sent I0517 00:27:39.179628 7 log.go:172] (0xc0024f2c60) Data frame received for 3 I0517 00:27:39.179640 7 log.go:172] (0xc0012a8fa0) (3) Data frame handling I0517 00:27:39.181591 7 log.go:172] (0xc0024f2c60) Data frame received for 1 I0517 00:27:39.181626 7 log.go:172] (0xc002184d20) (1) Data frame handling I0517 00:27:39.181656 7 log.go:172] (0xc002184d20) (1) Data frame sent I0517 00:27:39.181695 7 log.go:172] (0xc0024f2c60) (0xc002184d20) Stream removed, broadcasting: 1 I0517 00:27:39.181729 7 log.go:172] (0xc0024f2c60) Go away received I0517 00:27:39.181856 7 log.go:172] (0xc0024f2c60) (0xc002184d20) Stream removed, broadcasting: 1 I0517 00:27:39.181871 7 log.go:172] (0xc0024f2c60) (0xc0012a8fa0) Stream removed, broadcasting: 3 I0517 00:27:39.181879 7 log.go:172] (0xc0024f2c60) (0xc001f76000) Stream removed, broadcasting: 5 May 17 00:27:39.181: INFO: Exec stderr: "" May 17 00:27:39.181: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.181: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:39.209345 7 log.go:172] (0xc0017f7080) (0xc0020dc140) Create stream I0517 00:27:39.209382 7 log.go:172] (0xc0017f7080) (0xc0020dc140) Stream added, broadcasting: 1 
I0517 00:27:39.211444 7 log.go:172] (0xc0017f7080) Reply frame received for 1 I0517 00:27:39.211482 7 log.go:172] (0xc0017f7080) (0xc0012a9180) Create stream I0517 00:27:39.211497 7 log.go:172] (0xc0017f7080) (0xc0012a9180) Stream added, broadcasting: 3 I0517 00:27:39.212307 7 log.go:172] (0xc0017f7080) Reply frame received for 3 I0517 00:27:39.212333 7 log.go:172] (0xc0017f7080) (0xc002184dc0) Create stream I0517 00:27:39.212340 7 log.go:172] (0xc0017f7080) (0xc002184dc0) Stream added, broadcasting: 5 I0517 00:27:39.213297 7 log.go:172] (0xc0017f7080) Reply frame received for 5 I0517 00:27:39.282986 7 log.go:172] (0xc0017f7080) Data frame received for 3 I0517 00:27:39.283028 7 log.go:172] (0xc0012a9180) (3) Data frame handling I0517 00:27:39.283045 7 log.go:172] (0xc0012a9180) (3) Data frame sent I0517 00:27:39.283055 7 log.go:172] (0xc0017f7080) Data frame received for 3 I0517 00:27:39.283062 7 log.go:172] (0xc0012a9180) (3) Data frame handling I0517 00:27:39.283079 7 log.go:172] (0xc0017f7080) Data frame received for 5 I0517 00:27:39.283086 7 log.go:172] (0xc002184dc0) (5) Data frame handling I0517 00:27:39.283814 7 log.go:172] (0xc0017f7080) Data frame received for 1 I0517 00:27:39.283826 7 log.go:172] (0xc0020dc140) (1) Data frame handling I0517 00:27:39.283832 7 log.go:172] (0xc0020dc140) (1) Data frame sent I0517 00:27:39.283838 7 log.go:172] (0xc0017f7080) (0xc0020dc140) Stream removed, broadcasting: 1 I0517 00:27:39.283871 7 log.go:172] (0xc0017f7080) Go away received I0517 00:27:39.283903 7 log.go:172] (0xc0017f7080) (0xc0020dc140) Stream removed, broadcasting: 1 I0517 00:27:39.283916 7 log.go:172] (0xc0017f7080) (0xc0012a9180) Stream removed, broadcasting: 3 I0517 00:27:39.283929 7 log.go:172] (0xc0017f7080) (0xc002184dc0) Stream removed, broadcasting: 5 May 17 00:27:39.283: INFO: Exec stderr: "" May 17 00:27:39.283: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-2 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.283: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:39.324364 7 log.go:172] (0xc0017f76b0) (0xc0020dc500) Create stream I0517 00:27:39.324391 7 log.go:172] (0xc0017f76b0) (0xc0020dc500) Stream added, broadcasting: 1 I0517 00:27:39.325861 7 log.go:172] (0xc0017f76b0) Reply frame received for 1 I0517 00:27:39.325885 7 log.go:172] (0xc0017f76b0) (0xc0012a92c0) Create stream I0517 00:27:39.325894 7 log.go:172] (0xc0017f76b0) (0xc0012a92c0) Stream added, broadcasting: 3 I0517 00:27:39.326448 7 log.go:172] (0xc0017f76b0) Reply frame received for 3 I0517 00:27:39.326467 7 log.go:172] (0xc0017f76b0) (0xc002184e60) Create stream I0517 00:27:39.326474 7 log.go:172] (0xc0017f76b0) (0xc002184e60) Stream added, broadcasting: 5 I0517 00:27:39.326952 7 log.go:172] (0xc0017f76b0) Reply frame received for 5 I0517 00:27:39.377371 7 log.go:172] (0xc0017f76b0) Data frame received for 5 I0517 00:27:39.377418 7 log.go:172] (0xc002184e60) (5) Data frame handling I0517 00:27:39.377450 7 log.go:172] (0xc0017f76b0) Data frame received for 3 I0517 00:27:39.377480 7 log.go:172] (0xc0012a92c0) (3) Data frame handling I0517 00:27:39.377516 7 log.go:172] (0xc0012a92c0) (3) Data frame sent I0517 00:27:39.377532 7 log.go:172] (0xc0017f76b0) Data frame received for 3 I0517 00:27:39.377546 7 log.go:172] (0xc0012a92c0) (3) Data frame handling I0517 00:27:39.379354 7 log.go:172] (0xc0017f76b0) Data frame received for 1 I0517 00:27:39.379376 7 log.go:172] (0xc0020dc500) (1) Data frame handling I0517 00:27:39.379405 7 log.go:172] (0xc0020dc500) (1) Data frame sent I0517 00:27:39.379421 7 log.go:172] (0xc0017f76b0) (0xc0020dc500) Stream removed, broadcasting: 1 I0517 00:27:39.379435 7 log.go:172] (0xc0017f76b0) Go away received I0517 00:27:39.379539 7 log.go:172] (0xc0017f76b0) (0xc0020dc500) Stream removed, broadcasting: 1 I0517 00:27:39.379587 7 log.go:172] (0xc0017f76b0) (0xc0012a92c0) Stream removed, broadcasting: 3 
I0517 00:27:39.379606 7 log.go:172] (0xc0017f76b0) (0xc002184e60) Stream removed, broadcasting: 5 May 17 00:27:39.379: INFO: Exec stderr: "" May 17 00:27:39.379: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.379: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:39.404034 7 log.go:172] (0xc0017f7ce0) (0xc0020dc820) Create stream I0517 00:27:39.404063 7 log.go:172] (0xc0017f7ce0) (0xc0020dc820) Stream added, broadcasting: 1 I0517 00:27:39.406189 7 log.go:172] (0xc0017f7ce0) Reply frame received for 1 I0517 00:27:39.406219 7 log.go:172] (0xc0017f7ce0) (0xc002184f00) Create stream I0517 00:27:39.406231 7 log.go:172] (0xc0017f7ce0) (0xc002184f00) Stream added, broadcasting: 3 I0517 00:27:39.407184 7 log.go:172] (0xc0017f7ce0) Reply frame received for 3 I0517 00:27:39.407212 7 log.go:172] (0xc0017f7ce0) (0xc001f760a0) Create stream I0517 00:27:39.407223 7 log.go:172] (0xc0017f7ce0) (0xc001f760a0) Stream added, broadcasting: 5 I0517 00:27:39.408135 7 log.go:172] (0xc0017f7ce0) Reply frame received for 5 I0517 00:27:39.481088 7 log.go:172] (0xc0017f7ce0) Data frame received for 3 I0517 00:27:39.481334 7 log.go:172] (0xc002184f00) (3) Data frame handling I0517 00:27:39.481360 7 log.go:172] (0xc002184f00) (3) Data frame sent I0517 00:27:39.481389 7 log.go:172] (0xc0017f7ce0) Data frame received for 3 I0517 00:27:39.481416 7 log.go:172] (0xc002184f00) (3) Data frame handling I0517 00:27:39.481451 7 log.go:172] (0xc0017f7ce0) Data frame received for 5 I0517 00:27:39.481474 7 log.go:172] (0xc001f760a0) (5) Data frame handling I0517 00:27:39.482761 7 log.go:172] (0xc0017f7ce0) Data frame received for 1 I0517 00:27:39.482788 7 log.go:172] (0xc0020dc820) (1) Data frame handling I0517 00:27:39.482804 7 log.go:172] (0xc0020dc820) (1) Data frame sent I0517 00:27:39.482824 7 log.go:172] (0xc0017f7ce0) 
(0xc0020dc820) Stream removed, broadcasting: 1 I0517 00:27:39.482847 7 log.go:172] (0xc0017f7ce0) Go away received I0517 00:27:39.483002 7 log.go:172] (0xc0017f7ce0) (0xc0020dc820) Stream removed, broadcasting: 1 I0517 00:27:39.483026 7 log.go:172] (0xc0017f7ce0) (0xc002184f00) Stream removed, broadcasting: 3 I0517 00:27:39.483038 7 log.go:172] (0xc0017f7ce0) (0xc001f760a0) Stream removed, broadcasting: 5 May 17 00:27:39.483: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 17 00:27:39.483: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.483: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:39.546588 7 log.go:172] (0xc0024f3290) (0xc0021850e0) Create stream I0517 00:27:39.546647 7 log.go:172] (0xc0024f3290) (0xc0021850e0) Stream added, broadcasting: 1 I0517 00:27:39.548925 7 log.go:172] (0xc0024f3290) Reply frame received for 1 I0517 00:27:39.548968 7 log.go:172] (0xc0024f3290) (0xc002b36be0) Create stream I0517 00:27:39.548984 7 log.go:172] (0xc0024f3290) (0xc002b36be0) Stream added, broadcasting: 3 I0517 00:27:39.550197 7 log.go:172] (0xc0024f3290) Reply frame received for 3 I0517 00:27:39.550252 7 log.go:172] (0xc0024f3290) (0xc001f76140) Create stream I0517 00:27:39.550274 7 log.go:172] (0xc0024f3290) (0xc001f76140) Stream added, broadcasting: 5 I0517 00:27:39.551203 7 log.go:172] (0xc0024f3290) Reply frame received for 5 I0517 00:27:39.627535 7 log.go:172] (0xc0024f3290) Data frame received for 5 I0517 00:27:39.627563 7 log.go:172] (0xc001f76140) (5) Data frame handling I0517 00:27:39.627587 7 log.go:172] (0xc0024f3290) Data frame received for 3 I0517 00:27:39.627601 7 log.go:172] (0xc002b36be0) (3) Data frame handling I0517 00:27:39.627614 7 log.go:172] (0xc002b36be0) (3) Data frame sent I0517 00:27:39.627623 7 
log.go:172] (0xc0024f3290) Data frame received for 3 I0517 00:27:39.627652 7 log.go:172] (0xc002b36be0) (3) Data frame handling I0517 00:27:39.629103 7 log.go:172] (0xc0024f3290) Data frame received for 1 I0517 00:27:39.629265 7 log.go:172] (0xc0021850e0) (1) Data frame handling I0517 00:27:39.629282 7 log.go:172] (0xc0021850e0) (1) Data frame sent I0517 00:27:39.629298 7 log.go:172] (0xc0024f3290) (0xc0021850e0) Stream removed, broadcasting: 1 I0517 00:27:39.629396 7 log.go:172] (0xc0024f3290) (0xc0021850e0) Stream removed, broadcasting: 1 I0517 00:27:39.629425 7 log.go:172] (0xc0024f3290) (0xc002b36be0) Stream removed, broadcasting: 3 I0517 00:27:39.629438 7 log.go:172] (0xc0024f3290) (0xc001f76140) Stream removed, broadcasting: 5 May 17 00:27:39.629: INFO: Exec stderr: "" May 17 00:27:39.629: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.629: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:39.629707 7 log.go:172] (0xc0024f3290) Go away received I0517 00:27:39.685800 7 log.go:172] (0xc001eceb00) (0xc002b36e60) Create stream I0517 00:27:39.685838 7 log.go:172] (0xc001eceb00) (0xc002b36e60) Stream added, broadcasting: 1 I0517 00:27:39.696489 7 log.go:172] (0xc001eceb00) Reply frame received for 1 I0517 00:27:39.696549 7 log.go:172] (0xc001eceb00) (0xc0012a9400) Create stream I0517 00:27:39.696565 7 log.go:172] (0xc001eceb00) (0xc0012a9400) Stream added, broadcasting: 3 I0517 00:27:39.698330 7 log.go:172] (0xc001eceb00) Reply frame received for 3 I0517 00:27:39.698374 7 log.go:172] (0xc001eceb00) (0xc0020dc960) Create stream I0517 00:27:39.698393 7 log.go:172] (0xc001eceb00) (0xc0020dc960) Stream added, broadcasting: 5 I0517 00:27:39.700312 7 log.go:172] (0xc001eceb00) Reply frame received for 5 I0517 00:27:39.750306 7 log.go:172] (0xc001eceb00) Data frame received for 3 I0517 
00:27:39.750340 7 log.go:172] (0xc0012a9400) (3) Data frame handling I0517 00:27:39.750359 7 log.go:172] (0xc0012a9400) (3) Data frame sent I0517 00:27:39.750370 7 log.go:172] (0xc001eceb00) Data frame received for 3 I0517 00:27:39.750379 7 log.go:172] (0xc0012a9400) (3) Data frame handling I0517 00:27:39.750424 7 log.go:172] (0xc001eceb00) Data frame received for 5 I0517 00:27:39.750450 7 log.go:172] (0xc0020dc960) (5) Data frame handling I0517 00:27:39.752038 7 log.go:172] (0xc001eceb00) Data frame received for 1 I0517 00:27:39.752061 7 log.go:172] (0xc002b36e60) (1) Data frame handling I0517 00:27:39.752087 7 log.go:172] (0xc002b36e60) (1) Data frame sent I0517 00:27:39.752109 7 log.go:172] (0xc001eceb00) (0xc002b36e60) Stream removed, broadcasting: 1 I0517 00:27:39.752208 7 log.go:172] (0xc001eceb00) (0xc002b36e60) Stream removed, broadcasting: 1 I0517 00:27:39.752236 7 log.go:172] (0xc001eceb00) (0xc0012a9400) Stream removed, broadcasting: 3 I0517 00:27:39.752310 7 log.go:172] (0xc001eceb00) Go away received I0517 00:27:39.752435 7 log.go:172] (0xc001eceb00) (0xc0020dc960) Stream removed, broadcasting: 5 May 17 00:27:39.752: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 17 00:27:39.752: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.752: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:39.791331 7 log.go:172] (0xc002cac630) (0xc0012a9860) Create stream I0517 00:27:39.791363 7 log.go:172] (0xc002cac630) (0xc0012a9860) Stream added, broadcasting: 1 I0517 00:27:39.793552 7 log.go:172] (0xc002cac630) Reply frame received for 1 I0517 00:27:39.793606 7 log.go:172] (0xc002cac630) (0xc0012a9b80) Create stream I0517 00:27:39.793623 7 log.go:172] (0xc002cac630) (0xc0012a9b80) Stream added, broadcasting: 3 I0517 
00:27:39.794613 7 log.go:172] (0xc002cac630) Reply frame received for 3 I0517 00:27:39.794649 7 log.go:172] (0xc002cac630) (0xc001f76320) Create stream I0517 00:27:39.794663 7 log.go:172] (0xc002cac630) (0xc001f76320) Stream added, broadcasting: 5 I0517 00:27:39.795538 7 log.go:172] (0xc002cac630) Reply frame received for 5 I0517 00:27:39.856368 7 log.go:172] (0xc002cac630) Data frame received for 3 I0517 00:27:39.856422 7 log.go:172] (0xc0012a9b80) (3) Data frame handling I0517 00:27:39.856455 7 log.go:172] (0xc0012a9b80) (3) Data frame sent I0517 00:27:39.856470 7 log.go:172] (0xc002cac630) Data frame received for 3 I0517 00:27:39.856480 7 log.go:172] (0xc0012a9b80) (3) Data frame handling I0517 00:27:39.856495 7 log.go:172] (0xc002cac630) Data frame received for 5 I0517 00:27:39.856505 7 log.go:172] (0xc001f76320) (5) Data frame handling I0517 00:27:39.858387 7 log.go:172] (0xc002cac630) Data frame received for 1 I0517 00:27:39.858424 7 log.go:172] (0xc0012a9860) (1) Data frame handling I0517 00:27:39.858459 7 log.go:172] (0xc0012a9860) (1) Data frame sent I0517 00:27:39.858518 7 log.go:172] (0xc002cac630) (0xc0012a9860) Stream removed, broadcasting: 1 I0517 00:27:39.858549 7 log.go:172] (0xc002cac630) Go away received I0517 00:27:39.858633 7 log.go:172] (0xc002cac630) (0xc0012a9860) Stream removed, broadcasting: 1 I0517 00:27:39.858663 7 log.go:172] (0xc002cac630) (0xc0012a9b80) Stream removed, broadcasting: 3 I0517 00:27:39.858678 7 log.go:172] (0xc002cac630) (0xc001f76320) Stream removed, broadcasting: 5 May 17 00:27:39.858: INFO: Exec stderr: "" May 17 00:27:39.858: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.858: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:39.891208 7 log.go:172] (0xc001ecf130) (0xc002b370e0) Create stream I0517 00:27:39.891233 7 log.go:172] 
(0xc001ecf130) (0xc002b370e0) Stream added, broadcasting: 1 I0517 00:27:39.893790 7 log.go:172] (0xc001ecf130) Reply frame received for 1 I0517 00:27:39.893830 7 log.go:172] (0xc001ecf130) (0xc002185220) Create stream I0517 00:27:39.893841 7 log.go:172] (0xc001ecf130) (0xc002185220) Stream added, broadcasting: 3 I0517 00:27:39.895013 7 log.go:172] (0xc001ecf130) Reply frame received for 3 I0517 00:27:39.895112 7 log.go:172] (0xc001ecf130) (0xc001f76460) Create stream I0517 00:27:39.895136 7 log.go:172] (0xc001ecf130) (0xc001f76460) Stream added, broadcasting: 5 I0517 00:27:39.896264 7 log.go:172] (0xc001ecf130) Reply frame received for 5 I0517 00:27:39.970084 7 log.go:172] (0xc001ecf130) Data frame received for 5 I0517 00:27:39.970122 7 log.go:172] (0xc001f76460) (5) Data frame handling I0517 00:27:39.970145 7 log.go:172] (0xc001ecf130) Data frame received for 3 I0517 00:27:39.970158 7 log.go:172] (0xc002185220) (3) Data frame handling I0517 00:27:39.970173 7 log.go:172] (0xc002185220) (3) Data frame sent I0517 00:27:39.970185 7 log.go:172] (0xc001ecf130) Data frame received for 3 I0517 00:27:39.970199 7 log.go:172] (0xc002185220) (3) Data frame handling I0517 00:27:39.971160 7 log.go:172] (0xc001ecf130) Data frame received for 1 I0517 00:27:39.971201 7 log.go:172] (0xc002b370e0) (1) Data frame handling I0517 00:27:39.971230 7 log.go:172] (0xc002b370e0) (1) Data frame sent I0517 00:27:39.971250 7 log.go:172] (0xc001ecf130) (0xc002b370e0) Stream removed, broadcasting: 1 I0517 00:27:39.971269 7 log.go:172] (0xc001ecf130) Go away received I0517 00:27:39.971415 7 log.go:172] (0xc001ecf130) (0xc002b370e0) Stream removed, broadcasting: 1 I0517 00:27:39.971443 7 log.go:172] (0xc001ecf130) (0xc002185220) Stream removed, broadcasting: 3 I0517 00:27:39.971460 7 log.go:172] (0xc001ecf130) (0xc001f76460) Stream removed, broadcasting: 5 May 17 00:27:39.971: INFO: Exec stderr: "" May 17 00:27:39.971: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:39.971: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:40.004287 7 log.go:172] (0xc001d3d130) (0xc001f76820) Create stream I0517 00:27:40.004326 7 log.go:172] (0xc001d3d130) (0xc001f76820) Stream added, broadcasting: 1 I0517 00:27:40.007292 7 log.go:172] (0xc001d3d130) Reply frame received for 1 I0517 00:27:40.007439 7 log.go:172] (0xc001d3d130) (0xc001f768c0) Create stream I0517 00:27:40.007479 7 log.go:172] (0xc001d3d130) (0xc001f768c0) Stream added, broadcasting: 3 I0517 00:27:40.008455 7 log.go:172] (0xc001d3d130) Reply frame received for 3 I0517 00:27:40.008499 7 log.go:172] (0xc001d3d130) (0xc0020dca00) Create stream I0517 00:27:40.008516 7 log.go:172] (0xc001d3d130) (0xc0020dca00) Stream added, broadcasting: 5 I0517 00:27:40.009503 7 log.go:172] (0xc001d3d130) Reply frame received for 5 I0517 00:27:40.065495 7 log.go:172] (0xc001d3d130) Data frame received for 3 I0517 00:27:40.065550 7 log.go:172] (0xc001f768c0) (3) Data frame handling I0517 00:27:40.065574 7 log.go:172] (0xc001f768c0) (3) Data frame sent I0517 00:27:40.065597 7 log.go:172] (0xc001d3d130) Data frame received for 3 I0517 00:27:40.065630 7 log.go:172] (0xc001f768c0) (3) Data frame handling I0517 00:27:40.065658 7 log.go:172] (0xc001d3d130) Data frame received for 5 I0517 00:27:40.065679 7 log.go:172] (0xc0020dca00) (5) Data frame handling I0517 00:27:40.067048 7 log.go:172] (0xc001d3d130) Data frame received for 1 I0517 00:27:40.067077 7 log.go:172] (0xc001f76820) (1) Data frame handling I0517 00:27:40.067099 7 log.go:172] (0xc001f76820) (1) Data frame sent I0517 00:27:40.067115 7 log.go:172] (0xc001d3d130) (0xc001f76820) Stream removed, broadcasting: 1 I0517 00:27:40.067214 7 log.go:172] (0xc001d3d130) (0xc001f76820) Stream removed, broadcasting: 1 I0517 00:27:40.067239 7 log.go:172] (0xc001d3d130) (0xc001f768c0) 
Stream removed, broadcasting: 3 I0517 00:27:40.067315 7 log.go:172] (0xc001d3d130) Go away received I0517 00:27:40.067442 7 log.go:172] (0xc001d3d130) (0xc0020dca00) Stream removed, broadcasting: 5 May 17 00:27:40.067: INFO: Exec stderr: "" May 17 00:27:40.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7242 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:27:40.067: INFO: >>> kubeConfig: /root/.kube/config I0517 00:27:40.101406 7 log.go:172] (0xc0024f38c0) (0xc0021854a0) Create stream I0517 00:27:40.101439 7 log.go:172] (0xc0024f38c0) (0xc0021854a0) Stream added, broadcasting: 1 I0517 00:27:40.103973 7 log.go:172] (0xc0024f38c0) Reply frame received for 1 I0517 00:27:40.104023 7 log.go:172] (0xc0024f38c0) (0xc002185540) Create stream I0517 00:27:40.104039 7 log.go:172] (0xc0024f38c0) (0xc002185540) Stream added, broadcasting: 3 I0517 00:27:40.105078 7 log.go:172] (0xc0024f38c0) Reply frame received for 3 I0517 00:27:40.105283 7 log.go:172] (0xc0024f38c0) (0xc002b372c0) Create stream I0517 00:27:40.105309 7 log.go:172] (0xc0024f38c0) (0xc002b372c0) Stream added, broadcasting: 5 I0517 00:27:40.106399 7 log.go:172] (0xc0024f38c0) Reply frame received for 5 I0517 00:27:40.166636 7 log.go:172] (0xc0024f38c0) Data frame received for 5 I0517 00:27:40.166668 7 log.go:172] (0xc002b372c0) (5) Data frame handling I0517 00:27:40.166696 7 log.go:172] (0xc0024f38c0) Data frame received for 3 I0517 00:27:40.166734 7 log.go:172] (0xc002185540) (3) Data frame handling I0517 00:27:40.166750 7 log.go:172] (0xc002185540) (3) Data frame sent I0517 00:27:40.166760 7 log.go:172] (0xc0024f38c0) Data frame received for 3 I0517 00:27:40.166767 7 log.go:172] (0xc002185540) (3) Data frame handling I0517 00:27:40.168186 7 log.go:172] (0xc0024f38c0) Data frame received for 1 I0517 00:27:40.168213 7 log.go:172] (0xc0021854a0) (1) Data frame handling I0517 
00:27:40.168240 7 log.go:172] (0xc0021854a0) (1) Data frame sent I0517 00:27:40.168262 7 log.go:172] (0xc0024f38c0) (0xc0021854a0) Stream removed, broadcasting: 1 I0517 00:27:40.168305 7 log.go:172] (0xc0024f38c0) Go away received I0517 00:27:40.168370 7 log.go:172] (0xc0024f38c0) (0xc0021854a0) Stream removed, broadcasting: 1 I0517 00:27:40.168392 7 log.go:172] (0xc0024f38c0) (0xc002185540) Stream removed, broadcasting: 3 I0517 00:27:40.168408 7 log.go:172] (0xc0024f38c0) (0xc002b372c0) Stream removed, broadcasting: 5 May 17 00:27:40.168: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:27:40.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7242" for this suite. • [SLOW TEST:11.348 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":122,"skipped":2089,"failed":0} [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:27:40.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName 
services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8895.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8895.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 17 00:27:46.269: INFO: DNS probes using dns-test-4adcfe1f-2c03-4db8-b1fe-81eec5552f6c succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8895.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8895.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 17 00:27:52.432: INFO: File wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' May 17 00:27:52.435: INFO: File jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 17 00:27:52.435: INFO: Lookups using dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 failed for: [wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local] May 17 00:27:57.439: INFO: File wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' May 17 00:27:57.443: INFO: File jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' May 17 00:27:57.443: INFO: Lookups using dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 failed for: [wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local] May 17 00:28:02.440: INFO: File wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' May 17 00:28:02.444: INFO: File jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' May 17 00:28:02.444: INFO: Lookups using dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 failed for: [wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local] May 17 00:28:07.440: INFO: File wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' May 17 00:28:07.443: INFO: File jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 17 00:28:07.443: INFO: Lookups using dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 failed for: [wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local] May 17 00:28:12.441: INFO: File wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' May 17 00:28:12.445: INFO: File jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local from pod dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 contains 'foo.example.com. ' instead of 'bar.example.com.' May 17 00:28:12.445: INFO: Lookups using dns-8895/dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 failed for: [wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local] May 17 00:28:17.444: INFO: DNS probes using dns-test-b37741ac-318b-4907-81f9-1f71e1d02e78 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8895.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8895.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8895.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8895.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 17 00:28:26.046: INFO: DNS probes using dns-test-8af71685-c0ec-4cb4-b16d-e38f63a66391 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:28:26.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-8895" for this suite. • [SLOW TEST:45.980 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":123,"skipped":2089,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:28:26.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating an pod May 17 00:28:26.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-9410 -- logs-generator --log-lines-total 100 --run-duration 20s' May 17 00:28:26.318: INFO: stderr: "" May 17 00:28:26.318: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to 
start. May 17 00:28:26.318: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 17 00:28:26.318: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9410" to be "running and ready, or succeeded" May 17 00:28:26.498: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 179.423087ms May 17 00:28:28.502: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18310827s May 17 00:28:30.507: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.188929141s May 17 00:28:30.507: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 17 00:28:30.507: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings May 17 00:28:30.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9410' May 17 00:28:30.617: INFO: stderr: "" May 17 00:28:30.617: INFO: stdout: "I0517 00:28:29.401777 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/xtd 319\nI0517 00:28:29.601927 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/q5kw 514\nI0517 00:28:29.802111 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/w9xv 340\nI0517 00:28:30.001946 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/d6q 528\nI0517 00:28:30.202005 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/k56 205\nI0517 00:28:30.401986 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/xjb 455\nI0517 00:28:30.601900 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/g5r 494\n" STEP: limiting log lines May 17 00:28:30.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9410 
--tail=1' May 17 00:28:30.721: INFO: stderr: "" May 17 00:28:30.721: INFO: stdout: "I0517 00:28:30.601900 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/g5r 494\n" May 17 00:28:30.721: INFO: got output "I0517 00:28:30.601900 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/g5r 494\n" STEP: limiting log bytes May 17 00:28:30.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9410 --limit-bytes=1' May 17 00:28:30.828: INFO: stderr: "" May 17 00:28:30.829: INFO: stdout: "I" May 17 00:28:30.829: INFO: got output "I" STEP: exposing timestamps May 17 00:28:30.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9410 --tail=1 --timestamps' May 17 00:28:30.936: INFO: stderr: "" May 17 00:28:30.936: INFO: stdout: "2020-05-17T00:28:30.802062116Z I0517 00:28:30.801938 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/mmc5 535\n" May 17 00:28:30.936: INFO: got output "2020-05-17T00:28:30.802062116Z I0517 00:28:30.801938 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/mmc5 535\n" STEP: restricting to a time range May 17 00:28:33.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9410 --since=1s' May 17 00:28:33.556: INFO: stderr: "" May 17 00:28:33.556: INFO: stdout: "I0517 00:28:32.601975 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/9xds 357\nI0517 00:28:32.801971 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/kcr 430\nI0517 00:28:33.001944 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/ghsg 472\nI0517 00:28:33.201990 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/kv9 309\nI0517 00:28:33.402000 1 logs_generator.go:76] 20 POST 
/api/v1/namespaces/ns/pods/d92 269\n" May 17 00:28:33.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9410 --since=24h' May 17 00:28:33.660: INFO: stderr: "" May 17 00:28:33.660: INFO: stdout: "I0517 00:28:29.401777 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/xtd 319\nI0517 00:28:29.601927 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/q5kw 514\nI0517 00:28:29.802111 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/w9xv 340\nI0517 00:28:30.001946 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/d6q 528\nI0517 00:28:30.202005 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/k56 205\nI0517 00:28:30.401986 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/xjb 455\nI0517 00:28:30.601900 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/g5r 494\nI0517 00:28:30.801938 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/mmc5 535\nI0517 00:28:31.001964 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/l2bs 480\nI0517 00:28:31.201955 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/xw7p 432\nI0517 00:28:31.401931 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/8qvw 397\nI0517 00:28:31.601913 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/x2l 406\nI0517 00:28:31.801939 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/msx 267\nI0517 00:28:32.001946 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/m7z9 409\nI0517 00:28:32.201984 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/2rk 429\nI0517 00:28:32.401938 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/kbmn 555\nI0517 00:28:32.601975 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/9xds 357\nI0517 00:28:32.801971 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/kcr 430\nI0517 
00:28:33.001944 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/ghsg 472\nI0517 00:28:33.201990 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/kv9 309\nI0517 00:28:33.402000 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/d92 269\nI0517 00:28:33.601977 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/gk6 585\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 17 00:28:33.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9410' May 17 00:28:45.281: INFO: stderr: "" May 17 00:28:45.281: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:28:45.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9410" for this suite. 
• [SLOW TEST:19.129 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":124,"skipped":2092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:28:45.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-2215bc75-1a60-465d-86e2-0661cf56059d STEP: Creating a pod to test consume secrets May 17 00:28:45.430: INFO: Waiting up to 5m0s for pod "pod-secrets-a3ebc9a6-9dbd-4819-8583-6a91ea06d015" in namespace "secrets-7102" to be "Succeeded or Failed" May 17 00:28:45.438: INFO: Pod "pod-secrets-a3ebc9a6-9dbd-4819-8583-6a91ea06d015": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038235ms May 17 00:28:47.527: INFO: Pod "pod-secrets-a3ebc9a6-9dbd-4819-8583-6a91ea06d015": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.097155272s May 17 00:28:49.532: INFO: Pod "pod-secrets-a3ebc9a6-9dbd-4819-8583-6a91ea06d015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101587148s STEP: Saw pod success May 17 00:28:49.532: INFO: Pod "pod-secrets-a3ebc9a6-9dbd-4819-8583-6a91ea06d015" satisfied condition "Succeeded or Failed" May 17 00:28:49.535: INFO: Trying to get logs from node latest-worker pod pod-secrets-a3ebc9a6-9dbd-4819-8583-6a91ea06d015 container secret-volume-test: STEP: delete the pod May 17 00:28:49.591: INFO: Waiting for pod pod-secrets-a3ebc9a6-9dbd-4819-8583-6a91ea06d015 to disappear May 17 00:28:49.621: INFO: Pod pod-secrets-a3ebc9a6-9dbd-4819-8583-6a91ea06d015 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:28:49.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7102" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":125,"skipped":2129,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:28:49.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 17 00:28:49.776: INFO: 
Waiting up to 1m0s for all nodes to be ready May 17 00:29:49.822: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:29:49.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 17 00:29:53.939: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:30:10.142: INFO: pods created so far: [1 1 1] May 17 00:30:10.142: INFO: length of pods created so far: 3 May 17 00:30:18.151: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:30:25.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-1831" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:30:25.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1989" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:95.629 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":126,"skipped":2146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:30:25.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 17 00:30:25.321: INFO: Waiting up to 5m0s for pod "downward-api-4cca985c-1c36-4924-bd65-a0dc103d4136" in namespace "downward-api-529" to be "Succeeded or Failed" May 17 00:30:25.361: INFO: Pod "downward-api-4cca985c-1c36-4924-bd65-a0dc103d4136": Phase="Pending", Reason="", 
readiness=false. Elapsed: 39.903189ms May 17 00:30:27.365: INFO: Pod "downward-api-4cca985c-1c36-4924-bd65-a0dc103d4136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043976545s May 17 00:30:29.369: INFO: Pod "downward-api-4cca985c-1c36-4924-bd65-a0dc103d4136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048154539s STEP: Saw pod success May 17 00:30:29.369: INFO: Pod "downward-api-4cca985c-1c36-4924-bd65-a0dc103d4136" satisfied condition "Succeeded or Failed" May 17 00:30:29.375: INFO: Trying to get logs from node latest-worker2 pod downward-api-4cca985c-1c36-4924-bd65-a0dc103d4136 container dapi-container: STEP: delete the pod May 17 00:30:29.406: INFO: Waiting for pod downward-api-4cca985c-1c36-4924-bd65-a0dc103d4136 to disappear May 17 00:30:29.498: INFO: Pod downward-api-4cca985c-1c36-4924-bd65-a0dc103d4136 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:30:29.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-529" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":127,"skipped":2186,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:30:29.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 17 00:30:29.947: INFO: Waiting up to 5m0s for pod "var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55" in namespace "var-expansion-3744" to be "Succeeded or Failed" May 17 00:30:30.366: INFO: Pod "var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55": Phase="Pending", Reason="", readiness=false. Elapsed: 419.427296ms May 17 00:30:32.481: INFO: Pod "var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534109415s May 17 00:30:34.749: INFO: Pod "var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.80189164s May 17 00:30:37.728: INFO: Pod "var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.780877424s STEP: Saw pod success May 17 00:30:37.728: INFO: Pod "var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55" satisfied condition "Succeeded or Failed" May 17 00:30:37.731: INFO: Trying to get logs from node latest-worker2 pod var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55 container dapi-container: STEP: delete the pod May 17 00:30:38.289: INFO: Waiting for pod var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55 to disappear May 17 00:30:38.292: INFO: Pod var-expansion-316f2ffe-5f8c-4f71-9f51-7895d7d81b55 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:30:38.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3744" for this suite. • [SLOW TEST:8.801 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":128,"skipped":2189,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:30:38.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account 
to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-988205b7-089f-46d4-a8d1-640e533210bf STEP: Creating configMap with name cm-test-opt-upd-cbb18712-245c-40ce-a357-d2541d6c9e8f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-988205b7-089f-46d4-a8d1-640e533210bf STEP: Updating configmap cm-test-opt-upd-cbb18712-245c-40ce-a357-d2541d6c9e8f STEP: Creating configMap with name cm-test-opt-create-552726ea-d22c-4f80-bc5c-dff4748327fe STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:30:46.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4097" for this suite. • [SLOW TEST:8.375 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2208,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client May 17 00:30:46.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 00:30:46.742: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0aae8e49-9356-48e1-bd49-4b31971fbb5e" in namespace "projected-5069" to be "Succeeded or Failed" May 17 00:30:46.746: INFO: Pod "downwardapi-volume-0aae8e49-9356-48e1-bd49-4b31971fbb5e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.601803ms May 17 00:30:48.822: INFO: Pod "downwardapi-volume-0aae8e49-9356-48e1-bd49-4b31971fbb5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079624362s May 17 00:30:50.845: INFO: Pod "downwardapi-volume-0aae8e49-9356-48e1-bd49-4b31971fbb5e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.102992905s STEP: Saw pod success May 17 00:30:50.845: INFO: Pod "downwardapi-volume-0aae8e49-9356-48e1-bd49-4b31971fbb5e" satisfied condition "Succeeded or Failed" May 17 00:30:50.848: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0aae8e49-9356-48e1-bd49-4b31971fbb5e container client-container: STEP: delete the pod May 17 00:30:50.871: INFO: Waiting for pod downwardapi-volume-0aae8e49-9356-48e1-bd49-4b31971fbb5e to disappear May 17 00:30:50.875: INFO: Pod downwardapi-volume-0aae8e49-9356-48e1-bd49-4b31971fbb5e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:30:50.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5069" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:30:50.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
projected-configmap-test-volume-f255f9b3-8c7e-48a8-99a8-1b60d6e1f98c STEP: Creating a pod to test consume configMaps May 17 00:30:51.010: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba" in namespace "projected-2077" to be "Succeeded or Failed" May 17 00:30:51.013: INFO: Pod "pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462054ms May 17 00:30:53.017: INFO: Pod "pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006930601s May 17 00:30:55.024: INFO: Pod "pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013688331s May 17 00:30:57.067: INFO: Pod "pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056995471s STEP: Saw pod success May 17 00:30:57.067: INFO: Pod "pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba" satisfied condition "Succeeded or Failed" May 17 00:30:57.070: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba container projected-configmap-volume-test: STEP: delete the pod May 17 00:30:57.122: INFO: Waiting for pod pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba to disappear May 17 00:30:57.130: INFO: Pod pod-projected-configmaps-2ccc2c1a-a21c-4eb4-8e39-a41626b5f1ba no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:30:57.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2077" for this suite. 
• [SLOW TEST:6.254 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":131,"skipped":2235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:30:57.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 17 00:30:57.232: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the sample API server.
May 17 00:30:57.960: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 17 00:31:00.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272257, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272257, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272258, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272257, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 00:31:02.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272257, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272257, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272258, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272257, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 00:31:05.113: INFO: Waited 683.411785ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:31:05.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1060" for this suite. • [SLOW TEST:8.745 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":132,"skipped":2275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:31:05.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 
[BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 17 00:31:06.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4293' May 17 00:31:06.467: INFO: stderr: "" May 17 00:31:06.467: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 17 00:31:11.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4293 -o json' May 17 00:31:11.617: INFO: stderr: "" May 17 00:31:11.617: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-17T00:31:06Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-17T00:31:06Z\"\n },\n 
{\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.156\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-17T00:31:10Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4293\",\n \"resourceVersion\": \"5287670\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4293/pods/e2e-test-httpd-pod\",\n \"uid\": \"a6ef3e50-419f-4282-b322-28652d68dd2a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5tm6s\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n 
{\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5tm6s\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5tm6s\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-17T00:31:06Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-17T00:31:10Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-17T00:31:10Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-17T00:31:06Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3f971eb981f344d311e6a2856cb1256f9dd647a527c47f52396e6c3d0fde3d19\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-17T00:31:09Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.156\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.156\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-17T00:31:06Z\"\n }\n}\n" STEP: replace the image in the pod May 17 00:31:11.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4293' May 17 
00:31:11.937: INFO: stderr: "" May 17 00:31:11.937: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 17 00:31:11.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4293' May 17 00:31:15.462: INFO: stderr: "" May 17 00:31:15.462: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:31:15.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4293" for this suite. • [SLOW TEST:9.587 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":133,"skipped":2332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:31:15.471: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 17 00:31:15.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2038' May 17 00:31:15.831: INFO: stderr: "" May 17 00:31:15.831: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 17 00:31:16.836: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:31:16.836: INFO: Found 0 / 1 May 17 00:31:17.836: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:31:17.836: INFO: Found 0 / 1 May 17 00:31:18.836: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:31:18.836: INFO: Found 1 / 1 May 17 00:31:18.836: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 17 00:31:18.840: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:31:18.840: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 17 00:31:18.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-px42x --namespace=kubectl-2038 -p {"metadata":{"annotations":{"x":"y"}}}' May 17 00:31:18.949: INFO: stderr: "" May 17 00:31:18.949: INFO: stdout: "pod/agnhost-master-px42x patched\n" STEP: checking annotations May 17 00:31:19.007: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:31:19.007: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:31:19.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2038" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":134,"skipped":2368,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:31:19.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 17 00:31:19.073: INFO: Created pod &Pod{ObjectMeta:{dns-6088 dns-6088 /api/v1/namespaces/dns-6088/pods/dns-6088 b38f2371-0efa-47b9-b792-f1be9aa0f8da 5287745 0 2020-05-17 00:31:19 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-17 00:31:19 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9pxgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9pxgt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9pxgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:
nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:31:19.101: INFO: The status of Pod dns-6088 is Pending, waiting for it to be Running (with Ready = true) May 17 00:31:21.133: INFO: The status of Pod dns-6088 is Pending, waiting for it to be Running (with Ready = true) May 17 00:31:23.106: INFO: The status of Pod dns-6088 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
May 17 00:31:23.106: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6088 PodName:dns-6088 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:31:23.106: INFO: >>> kubeConfig: /root/.kube/config I0517 00:31:23.140210 7 log.go:172] (0xc003422840) (0xc00131d220) Create stream I0517 00:31:23.140280 7 log.go:172] (0xc003422840) (0xc00131d220) Stream added, broadcasting: 1 I0517 00:31:23.141966 7 log.go:172] (0xc003422840) Reply frame received for 1 I0517 00:31:23.142017 7 log.go:172] (0xc003422840) (0xc001888140) Create stream I0517 00:31:23.142037 7 log.go:172] (0xc003422840) (0xc001888140) Stream added, broadcasting: 3 I0517 00:31:23.143010 7 log.go:172] (0xc003422840) Reply frame received for 3 I0517 00:31:23.143056 7 log.go:172] (0xc003422840) (0xc001b795e0) Create stream I0517 00:31:23.143067 7 log.go:172] (0xc003422840) (0xc001b795e0) Stream added, broadcasting: 5 I0517 00:31:23.143889 7 log.go:172] (0xc003422840) Reply frame received for 5 I0517 00:31:23.216767 7 log.go:172] (0xc003422840) Data frame received for 3 I0517 00:31:23.216799 7 log.go:172] (0xc001888140) (3) Data frame handling I0517 00:31:23.216816 7 log.go:172] (0xc001888140) (3) Data frame sent I0517 00:31:23.219484 7 log.go:172] (0xc003422840) Data frame received for 3 I0517 00:31:23.219512 7 log.go:172] (0xc001888140) (3) Data frame handling I0517 00:31:23.219530 7 log.go:172] (0xc003422840) Data frame received for 5 I0517 00:31:23.219541 7 log.go:172] (0xc001b795e0) (5) Data frame handling I0517 00:31:23.221329 7 log.go:172] (0xc003422840) Data frame received for 1 I0517 00:31:23.221355 7 log.go:172] (0xc00131d220) (1) Data frame handling I0517 00:31:23.221370 7 log.go:172] (0xc00131d220) (1) Data frame sent I0517 00:31:23.221389 7 log.go:172] (0xc003422840) (0xc00131d220) Stream removed, broadcasting: 1 I0517 00:31:23.221525 7 log.go:172] (0xc003422840) (0xc00131d220) Stream removed, broadcasting: 1 I0517 00:31:23.221546 7 
log.go:172] (0xc003422840) (0xc001888140) Stream removed, broadcasting: 3 I0517 00:31:23.221615 7 log.go:172] (0xc003422840) Go away received I0517 00:31:23.221680 7 log.go:172] (0xc003422840) (0xc001b795e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 17 00:31:23.221: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6088 PodName:dns-6088 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:31:23.221: INFO: >>> kubeConfig: /root/.kube/config I0517 00:31:23.252275 7 log.go:172] (0xc0034074a0) (0xc001b79a40) Create stream I0517 00:31:23.252315 7 log.go:172] (0xc0034074a0) (0xc001b79a40) Stream added, broadcasting: 1 I0517 00:31:23.254113 7 log.go:172] (0xc0034074a0) Reply frame received for 1 I0517 00:31:23.254150 7 log.go:172] (0xc0034074a0) (0xc00131d2c0) Create stream I0517 00:31:23.254168 7 log.go:172] (0xc0034074a0) (0xc00131d2c0) Stream added, broadcasting: 3 I0517 00:31:23.255189 7 log.go:172] (0xc0034074a0) Reply frame received for 3 I0517 00:31:23.255220 7 log.go:172] (0xc0034074a0) (0xc0017f2000) Create stream I0517 00:31:23.255230 7 log.go:172] (0xc0034074a0) (0xc0017f2000) Stream added, broadcasting: 5 I0517 00:31:23.256149 7 log.go:172] (0xc0034074a0) Reply frame received for 5 I0517 00:31:23.323193 7 log.go:172] (0xc0034074a0) Data frame received for 3 I0517 00:31:23.323225 7 log.go:172] (0xc00131d2c0) (3) Data frame handling I0517 00:31:23.323242 7 log.go:172] (0xc00131d2c0) (3) Data frame sent I0517 00:31:23.325744 7 log.go:172] (0xc0034074a0) Data frame received for 3 I0517 00:31:23.325770 7 log.go:172] (0xc00131d2c0) (3) Data frame handling I0517 00:31:23.325948 7 log.go:172] (0xc0034074a0) Data frame received for 5 I0517 00:31:23.325971 7 log.go:172] (0xc0017f2000) (5) Data frame handling I0517 00:31:23.327652 7 log.go:172] (0xc0034074a0) Data frame received for 1 I0517 00:31:23.327681 7 log.go:172] (0xc001b79a40) (1) Data 
frame handling I0517 00:31:23.327708 7 log.go:172] (0xc001b79a40) (1) Data frame sent I0517 00:31:23.327729 7 log.go:172] (0xc0034074a0) (0xc001b79a40) Stream removed, broadcasting: 1 I0517 00:31:23.327804 7 log.go:172] (0xc0034074a0) (0xc001b79a40) Stream removed, broadcasting: 1 I0517 00:31:23.327816 7 log.go:172] (0xc0034074a0) (0xc00131d2c0) Stream removed, broadcasting: 3 I0517 00:31:23.327985 7 log.go:172] (0xc0034074a0) Go away received I0517 00:31:23.328067 7 log.go:172] (0xc0034074a0) (0xc0017f2000) Stream removed, broadcasting: 5 May 17 00:31:23.328: INFO: Deleting pod dns-6088... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:31:23.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6088" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":135,"skipped":2383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:31:23.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-b98c7058-6756-4dc8-9454-80d4d932eb39 
STEP: Creating a pod to test consume configMaps
May 17 00:31:23.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9" in namespace "configmap-4173" to be "Succeeded or Failed"
May 17 00:31:23.645: INFO: Pod "pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 86.452512ms
May 17 00:31:25.906: INFO: Pod "pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347281681s
May 17 00:31:27.911: INFO: Pod "pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351909714s
May 17 00:31:29.914: INFO: Pod "pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.355445198s
STEP: Saw pod success
May 17 00:31:29.914: INFO: Pod "pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9" satisfied condition "Succeeded or Failed"
May 17 00:31:29.918: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9 container configmap-volume-test:
STEP: delete the pod
May 17 00:31:29.945: INFO: Waiting for pod pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9 to disappear
May 17 00:31:29.953: INFO: Pod pod-configmaps-aa0322d3-bb42-4e24-b788-85046e1d7ca9 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:31:29.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4173" for this suite.
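For readers following along, the pod the suite generates for this case roughly corresponds to the following manifest sketch: a ConfigMap key is remapped to a different file path via `items` and given an explicit file `mode`. All names, the image, and the command here are illustrative, not the generated ones from the log.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # illustrative; the suite appends a UUID
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # assumed image for the sketch
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1        # "mapping": key name != file path
        mode: 0400                  # the "Item mode" under test
```

The pod succeeds once the remapped file is readable with the requested mode, which is what the Phase transitions above are polling for.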
• [SLOW TEST:6.494 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2412,"failed":0}
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:31:29.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 17 00:31:30.019: INFO: Creating ReplicaSet my-hostname-basic-4ab59afe-937e-48c0-a6cb-815f09872de5
May 17 00:31:30.041: INFO: Pod name my-hostname-basic-4ab59afe-937e-48c0-a6cb-815f09872de5: Found 0 pods out of 1
May 17 00:31:35.048: INFO: Pod name my-hostname-basic-4ab59afe-937e-48c0-a6cb-815f09872de5: Found 1 pods out of 1
May 17 00:31:35.048: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-4ab59afe-937e-48c0-a6cb-815f09872de5" is running
May 17 00:31:35.055: INFO: Pod "my-hostname-basic-4ab59afe-937e-48c0-a6cb-815f09872de5-kd8q2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 00:31:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 00:31:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 00:31:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 00:31:30 +0000 UTC Reason: Message:}])
May 17 00:31:35.056: INFO: Trying to dial the pod
May 17 00:31:40.066: INFO: Controller my-hostname-basic-4ab59afe-937e-48c0-a6cb-815f09872de5: Got expected result from replica 1 [my-hostname-basic-4ab59afe-937e-48c0-a6cb-815f09872de5-kd8q2]: "my-hostname-basic-4ab59afe-937e-48c0-a6cb-815f09872de5-kd8q2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:31:40.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1412" for this suite.
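The ReplicaSet exercised above serves each pod's own hostname over HTTP, so "Got expected result from replica 1" means the dialed pod returned its own pod name. A minimal sketch of such a ReplicaSet, under the assumption of a serve-hostname-style image (the image tag and port here are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic        # illustrative; the suite appends a UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # assumed: an image whose HTTP response body is the pod's hostname
        image: k8s.gcr.io/e2e-test-images/agnhost:2.12
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```

Since the response body equals the pod name, dialing every replica and comparing against the pod list verifies that each replica is serving.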
• [SLOW TEST:10.113 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":137,"skipped":2412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:31:40.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-d469076b-e58c-4089-89e5-c5eebffada18
STEP: Creating a pod to test consume configMaps
May 17 00:31:40.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-b427eb7a-6e62-4a99-a2ac-92fd08d08db7" in namespace "configmap-6335" to be "Succeeded or Failed"
May 17 00:31:40.218: INFO: Pod "pod-configmaps-b427eb7a-6e62-4a99-a2ac-92fd08d08db7": Phase="Pending", Reason="", readiness=false. Elapsed: 80.100504ms
May 17 00:31:42.222: INFO: Pod "pod-configmaps-b427eb7a-6e62-4a99-a2ac-92fd08d08db7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084168563s
May 17 00:31:44.225: INFO: Pod "pod-configmaps-b427eb7a-6e62-4a99-a2ac-92fd08d08db7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087599722s
STEP: Saw pod success
May 17 00:31:44.225: INFO: Pod "pod-configmaps-b427eb7a-6e62-4a99-a2ac-92fd08d08db7" satisfied condition "Succeeded or Failed"
May 17 00:31:44.228: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b427eb7a-6e62-4a99-a2ac-92fd08d08db7 container configmap-volume-test:
STEP: delete the pod
May 17 00:31:44.283: INFO: Waiting for pod pod-configmaps-b427eb7a-6e62-4a99-a2ac-92fd08d08db7 to disappear
May 17 00:31:44.287: INFO: Pod pod-configmaps-b427eb7a-6e62-4a99-a2ac-92fd08d08db7 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:31:44.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6335" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:31:44.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 17 00:31:44.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd" in namespace "downward-api-4142" to be "Succeeded or Failed"
May 17 00:31:44.459: INFO: Pod "downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.18913ms
May 17 00:31:46.487: INFO: Pod "downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063298338s
May 17 00:31:48.491: INFO: Pod "downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd": Phase="Running", Reason="", readiness=true. Elapsed: 4.067716269s
May 17 00:31:50.495: INFO: Pod "downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071372499s
STEP: Saw pod success
May 17 00:31:50.495: INFO: Pod "downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd" satisfied condition "Succeeded or Failed"
May 17 00:31:50.497: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd container client-container:
STEP: delete the pod
May 17 00:31:50.516: INFO: Waiting for pod downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd to disappear
May 17 00:31:50.521: INFO: Pod downwardapi-volume-3dc93bc9-f4dd-429d-abf1-12eac15997dd no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:31:50.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4142" for this suite.
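The downward API volume test above checks that `defaultMode` is applied to the projected files. A sketch of the shape of pod the suite creates (names, image, and command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume       # illustrative; the suite appends a UUID
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # assumed image for the sketch
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # print file modes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400          # the DefaultMode under test
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The container exits successfully only if the projected file shows the expected `-r--------` permissions, which is why the log polls the pod until `Phase="Succeeded"`.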
• [SLOW TEST:6.231 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":139,"skipped":2459,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:31:50.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0517 00:32:00.639182 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 17 00:32:00.639: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:32:00.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4154" for this suite.
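"Not orphaning" in the garbage collector test above corresponds to a delete request whose `propagationPolicy` lets the GC remove the dependent pods. A sketch of the relevant `DeleteOptions` body (this is the general API shape, not the suite's literal request):

```yaml
# DeleteOptions sent when deleting the ReplicationController without orphaning:
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Background   # dependents are garbage collected;
                                # "Orphan" would instead leave the pods behind
```

With `Background` (or `Foreground`) propagation, the owning RC's pods carry an `ownerReference` to it, so the garbage collector deletes them once the RC is gone, which is what "wait for all pods to be garbage collected" verifies.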
• [SLOW TEST:10.120 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":140,"skipped":2484,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:32:00.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 17 00:32:05.280: INFO: Successfully updated pod "labelsupdateeef4f134-3b01-4e2d-a00b-e6400de85106"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:32:07.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9236" for this suite.
• [SLOW TEST:6.671 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":141,"skipped":2486,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:32:07.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 17 00:32:07.428: INFO: Waiting up to 5m0s for pod "downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb" in namespace "downward-api-8926" to be "Succeeded or Failed"
May 17 00:32:07.450: INFO: Pod "downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.516328ms
May 17 00:32:09.475: INFO: Pod "downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04621548s
May 17 00:32:11.478: INFO: Pod "downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb": Phase="Running", Reason="", readiness=true. Elapsed: 4.049990198s
May 17 00:32:13.483: INFO: Pod "downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054350612s
STEP: Saw pod success
May 17 00:32:13.483: INFO: Pod "downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb" satisfied condition "Succeeded or Failed"
May 17 00:32:13.486: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb container client-container:
STEP: delete the pod
May 17 00:32:13.520: INFO: Waiting for pod downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb to disappear
May 17 00:32:13.532: INFO: Pod downwardapi-volume-470939e0-0f58-494d-82d7-0d497270acfb no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:32:13.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8926" for this suite.
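The "default memory limit" case above projects `limits.memory` through a downward API volume for a container that sets no memory limit; the projected value then falls back to the node's allocatable memory. A sketch of the relevant volume definition (names, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume       # illustrative; the suite appends a UUID
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # assumed image for the sketch
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits here: limits.memory resolves to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The test then compares the file's contents against the node's reported allocatable memory.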
• [SLOW TEST:6.245 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2501,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:32:13.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 17 00:32:13.696: INFO: Waiting up to 5m0s for pod "downward-api-f9e8f26c-9f7d-4c10-adc0-eb188cddacb3" in namespace "downward-api-7545" to be "Succeeded or Failed"
May 17 00:32:13.760: INFO: Pod "downward-api-f9e8f26c-9f7d-4c10-adc0-eb188cddacb3": Phase="Pending", Reason="", readiness=false. Elapsed: 63.480043ms
May 17 00:32:15.798: INFO: Pod "downward-api-f9e8f26c-9f7d-4c10-adc0-eb188cddacb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101947872s
May 17 00:32:17.883: INFO: Pod "downward-api-f9e8f26c-9f7d-4c10-adc0-eb188cddacb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186295158s
STEP: Saw pod success
May 17 00:32:17.883: INFO: Pod "downward-api-f9e8f26c-9f7d-4c10-adc0-eb188cddacb3" satisfied condition "Succeeded or Failed"
May 17 00:32:17.886: INFO: Trying to get logs from node latest-worker2 pod downward-api-f9e8f26c-9f7d-4c10-adc0-eb188cddacb3 container dapi-container:
STEP: delete the pod
May 17 00:32:17.954: INFO: Waiting for pod downward-api-f9e8f26c-9f7d-4c10-adc0-eb188cddacb3 to disappear
May 17 00:32:17.963: INFO: Pod downward-api-f9e8f26c-9f7d-4c10-adc0-eb188cddacb3 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:32:17.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7545" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2508,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:32:17.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
May 17 00:32:18.052: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:32:24.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6404" for this suite.
• [SLOW TEST:6.729 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":144,"skipped":2509,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:32:24.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 17 00:32:24.800: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7c9df3a2-87d2-4ff7-bb0b-d9ab818a963b" in namespace "security-context-test-966" to be "Succeeded or Failed"
May 17 00:32:24.804: INFO: Pod "busybox-readonly-false-7c9df3a2-87d2-4ff7-bb0b-d9ab818a963b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.716895ms
May 17 00:32:26.808: INFO: Pod "busybox-readonly-false-7c9df3a2-87d2-4ff7-bb0b-d9ab818a963b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008188496s
May 17 00:32:28.812: INFO: Pod "busybox-readonly-false-7c9df3a2-87d2-4ff7-bb0b-d9ab818a963b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011760486s
May 17 00:32:28.812: INFO: Pod "busybox-readonly-false-7c9df3a2-87d2-4ff7-bb0b-d9ab818a963b" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:32:28.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-966" for this suite.
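The `busybox-readonly-false` pod above succeeds because writing to the root filesystem is allowed when `readOnlyRootFilesystem` is false. A sketch of such a pod (the write command is an assumption for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false   # illustrative; the suite appends a UUID
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-false
    image: busybox
    # Writing into the rootfs succeeds, so the pod reaches Succeeded.
    # With readOnlyRootFilesystem: true this write would fail instead.
    command: ["sh", "-c", "echo checking > /file_in_rootfs"]
    securityContext:
      readOnlyRootFilesystem: false
```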
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2519,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:32:28.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-709cc918-806e-4c8d-b408-1d8f51c28fbb
STEP: Creating a pod to test consume secrets
May 17 00:32:28.900: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7c33c2b9-5658-49a6-9377-96d0cadfdac5" in namespace "projected-6264" to be "Succeeded or Failed"
May 17 00:32:28.904: INFO: Pod "pod-projected-secrets-7c33c2b9-5658-49a6-9377-96d0cadfdac5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02861ms
May 17 00:32:30.908: INFO: Pod "pod-projected-secrets-7c33c2b9-5658-49a6-9377-96d0cadfdac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007798679s
May 17 00:32:32.911: INFO: Pod "pod-projected-secrets-7c33c2b9-5658-49a6-9377-96d0cadfdac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01108523s
STEP: Saw pod success
May 17 00:32:32.911: INFO: Pod "pod-projected-secrets-7c33c2b9-5658-49a6-9377-96d0cadfdac5" satisfied condition "Succeeded or Failed"
May 17 00:32:32.913: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-7c33c2b9-5658-49a6-9377-96d0cadfdac5 container projected-secret-volume-test:
STEP: delete the pod
May 17 00:32:33.004: INFO: Waiting for pod pod-projected-secrets-7c33c2b9-5658-49a6-9377-96d0cadfdac5 to disappear
May 17 00:32:33.012: INFO: Pod pod-projected-secrets-7c33c2b9-5658-49a6-9377-96d0cadfdac5 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:32:33.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6264" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:32:33.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7376.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7376.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7376.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7376.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7376.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7376.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 17 00:32:39.241: INFO: DNS probes using dns-7376/dns-test-6cca0e1f-a5ce-451c-9788-33d582608b46 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:32:39.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7376" for this suite.
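The names probed above (`dns-querier-2.dns-test-service-2.dns-7376.svc.cluster.local`) come from combining a headless service with a pod that sets `hostname` and `subdomain`. A minimal sketch of that arrangement (image and command are illustrative; the namespace `dns-7376` is the suite-generated one from the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None          # headless: per-pod DNS records are published
  selector:
    name: dns-querier-2
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2          # first DNS label
  subdomain: dns-test-service-2    # must match the headless service name
  containers:
  - name: querier
    image: busybox                 # assumed image for the sketch
    command: ["sleep", "600"]
```

With this shape, the pod resolves as `<hostname>.<subdomain>.<namespace>.svc.cluster.local`, which is exactly what the wheezy and jessie probe loops check via `getent hosts` and `dig`.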
• [SLOW TEST:6.339 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":147,"skipped":2568,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:32:39.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-dbee81ff-e50b-40ac-b43f-fa059f3b66c7 in namespace container-probe-9443
May 17 00:32:43.836: INFO: Started pod liveness-dbee81ff-e50b-40ac-b43f-fa059f3b66c7 in namespace container-probe-9443
STEP: checking the pod's current state and verifying that restartCount is present
May 17 00:32:43.839: INFO: Initial restart count of pod liveness-dbee81ff-e50b-40ac-b43f-fa059f3b66c7 is 0
May 17 00:33:05.898: INFO: Restart count of pod container-probe-9443/liveness-dbee81ff-e50b-40ac-b43f-fa059f3b66c7 is now 1 (22.059737613s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:33:05.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9443" for this suite.
• [SLOW TEST:26.680 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2576,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:33:06.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 17 00:33:06.360: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 17 00:33:06.424: INFO: Waiting for terminating namespaces to be deleted...
May 17 00:33:06.427: INFO: Logging pods the apiserver thinks is on node latest-worker before test
May 17 00:33:06.431: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 17 00:33:06.431: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 17 00:33:06.431: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 17 00:33:06.431: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 17 00:33:06.431: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 17 00:33:06.431: INFO: Container kindnet-cni ready: true, restart count 0
May 17 00:33:06.431: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 17 00:33:06.431: INFO: Container kube-proxy ready: true, restart count 0
May 17 00:33:06.431: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
May 17 00:33:06.436: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 17 00:33:06.436: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 17 00:33:06.436: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 17 00:33:06.436: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 17 00:33:06.436: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 17 00:33:06.436: INFO: Container kindnet-cni ready: true, restart count 0
May 17 00:33:06.436: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 17 00:33:06.436: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
May 17 00:33:06.512: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker
May 17 00:33:06.512: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2
May 17 00:33:06.512: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker
May 17 00:33:06.512: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2
May 17 00:33:06.512: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker
May 17 00:33:06.512: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2
STEP: Starting Pods to consume most of the cluster CPU.
May 17 00:33:06.512: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
May 17 00:33:06.543: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU. 
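The filler-pod size above is node allocatable CPU minus the sum of existing pod requests on that node. The log only shows the resulting 11130m request, not the allocatable figure, so the value below is an assumption chosen to reproduce the arithmetic:

```shell
# Sketch of the per-node filler-pod sizing. allocatable_m is a hypothetical
# value (not printed in the log); the requests are taken from the log above.
allocatable_m=11230            # assumed node allocatable CPU, millicores
requested_m=$((100 + 0))       # kindnet (100m) + kube-proxy (0m)
filler_m=$((allocatable_m - requested_m))
echo "${filler_m}m"
# → 11130m
```

With both nodes filled this way, the subsequent "additional-pod" cannot fit anywhere, which is exactly the FailedScheduling event the test waits for.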
STEP: Considering event: Type = [Normal], Name = [filler-pod-5b9575f2-2010-446d-a64e-c694b1e36bf8.160fa986068341c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5675/filler-pod-5b9575f2-2010-446d-a64e-c694b1e36bf8 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5b9575f2-2010-446d-a64e-c694b1e36bf8.160fa986b7fbcee6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5b9575f2-2010-446d-a64e-c694b1e36bf8.160fa98714d27a24], Reason = [Created], Message = [Created container filler-pod-5b9575f2-2010-446d-a64e-c694b1e36bf8]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5b9575f2-2010-446d-a64e-c694b1e36bf8.160fa98726b28cb9], Reason = [Started], Message = [Started container filler-pod-5b9575f2-2010-446d-a64e-c694b1e36bf8]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c3f5cb84-3555-4ac3-b4ae-257102e8000e.160fa9860499a6ff], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5675/filler-pod-c3f5cb84-3555-4ac3-b4ae-257102e8000e to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c3f5cb84-3555-4ac3-b4ae-257102e8000e.160fa9866521b768], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c3f5cb84-3555-4ac3-b4ae-257102e8000e.160fa986b4f87ca2], Reason = [Created], Message = [Created container filler-pod-c3f5cb84-3555-4ac3-b4ae-257102e8000e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-c3f5cb84-3555-4ac3-b4ae-257102e8000e.160fa986cc4c4a1f], Reason = [Started], Message = [Started container filler-pod-c3f5cb84-3555-4ac3-b4ae-257102e8000e]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160fa9878905831c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160fa98789fedb50], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:33:14.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5675" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.121 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":149,"skipped":2577,"failed":0}
SSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:33:14.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: 
Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:33:14.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7664" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":150,"skipped":2583,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:33:14.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 17 00:33:14.318: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:33:14.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1433" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":151,"skipped":2600,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:33:14.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:33:14.842: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:33:16.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725272394, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272394, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272395, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272394, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 00:33:18.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272394, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272394, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272395, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272394, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:33:21.888: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering 
the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:33:22.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1090" for this suite. STEP: Destroying namespace "webhook-1090-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.965 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":152,"skipped":2622,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:33:22.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should 
create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 17 00:33:22.425: INFO: namespace kubectl-498 May 17 00:33:22.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-498' May 17 00:33:22.664: INFO: stderr: "" May 17 00:33:22.664: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 17 00:33:23.668: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:33:23.668: INFO: Found 0 / 1 May 17 00:33:24.668: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:33:24.668: INFO: Found 0 / 1 May 17 00:33:25.668: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:33:25.668: INFO: Found 0 / 1 May 17 00:33:26.668: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:33:26.668: INFO: Found 1 / 1 May 17 00:33:26.669: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 17 00:33:26.671: INFO: Selector matched 1 pods for map[app:agnhost] May 17 00:33:26.671: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 17 00:33:26.671: INFO: wait on agnhost-master startup in kubectl-498 May 17 00:33:26.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-26tgx agnhost-master --namespace=kubectl-498' May 17 00:33:26.783: INFO: stderr: "" May 17 00:33:26.784: INFO: stdout: "Paused\n" STEP: exposing RC May 17 00:33:26.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-498' May 17 00:33:26.998: INFO: stderr: "" May 17 00:33:26.998: INFO: stdout: "service/rm2 exposed\n" May 17 00:33:27.017: INFO: Service rm2 in namespace kubectl-498 found. 
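The expose step just shown creates a Service from the replication controller, mapping a new Service port onto the container's port 6379; the next step chains a second expose off that Service. A command sketch of the pair, stripped of the test's --server/--kubeconfig/--namespace flags (requires a live cluster, so it is illustrative only, not runnable here):

```shell
# Each expose picks its own Service port but keeps targetPort 6379,
# the port the agnhost container actually listens on.
kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
```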
STEP: exposing service May 17 00:33:29.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-498' May 17 00:33:29.268: INFO: stderr: "" May 17 00:33:29.268: INFO: stdout: "service/rm3 exposed\n" May 17 00:33:29.289: INFO: Service rm3 in namespace kubectl-498 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:33:31.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-498" for this suite. • [SLOW TEST:8.915 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":153,"skipped":2644,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:33:31.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should 
be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5594 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5594 STEP: creating replication controller externalsvc in namespace services-5594 I0517 00:33:31.527278 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5594, replica count: 2 I0517 00:33:34.577683 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:33:37.577921 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 17 00:33:37.680: INFO: Creating new exec pod May 17 00:33:41.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5594 execpod2rnkb -- /bin/sh -x -c nslookup nodeport-service' May 17 00:33:41.964: INFO: stderr: "I0517 00:33:41.843835 3555 log.go:172] (0xc000957130) (0xc000aec6e0) Create stream\nI0517 00:33:41.843897 3555 log.go:172] (0xc000957130) (0xc000aec6e0) Stream added, broadcasting: 1\nI0517 00:33:41.848731 3555 log.go:172] (0xc000957130) Reply frame received for 1\nI0517 00:33:41.848771 3555 log.go:172] (0xc000957130) (0xc0006461e0) Create stream\nI0517 00:33:41.848788 3555 log.go:172] (0xc000957130) (0xc0006461e0) Stream added, broadcasting: 3\nI0517 00:33:41.849891 3555 log.go:172] (0xc000957130) Reply frame received for 3\nI0517 00:33:41.849939 3555 log.go:172] (0xc000957130) (0xc000524000) Create stream\nI0517 00:33:41.849955 3555 
log.go:172] (0xc000957130) (0xc000524000) Stream added, broadcasting: 5\nI0517 00:33:41.851087 3555 log.go:172] (0xc000957130) Reply frame received for 5\nI0517 00:33:41.936888 3555 log.go:172] (0xc000957130) Data frame received for 5\nI0517 00:33:41.936936 3555 log.go:172] (0xc000524000) (5) Data frame handling\nI0517 00:33:41.936976 3555 log.go:172] (0xc000524000) (5) Data frame sent\n+ nslookup nodeport-service\nI0517 00:33:41.954014 3555 log.go:172] (0xc000957130) Data frame received for 3\nI0517 00:33:41.954054 3555 log.go:172] (0xc0006461e0) (3) Data frame handling\nI0517 00:33:41.954088 3555 log.go:172] (0xc0006461e0) (3) Data frame sent\nI0517 00:33:41.955078 3555 log.go:172] (0xc000957130) Data frame received for 3\nI0517 00:33:41.955106 3555 log.go:172] (0xc0006461e0) (3) Data frame handling\nI0517 00:33:41.955128 3555 log.go:172] (0xc0006461e0) (3) Data frame sent\nI0517 00:33:41.955383 3555 log.go:172] (0xc000957130) Data frame received for 5\nI0517 00:33:41.955406 3555 log.go:172] (0xc000524000) (5) Data frame handling\nI0517 00:33:41.955570 3555 log.go:172] (0xc000957130) Data frame received for 3\nI0517 00:33:41.955601 3555 log.go:172] (0xc0006461e0) (3) Data frame handling\nI0517 00:33:41.957790 3555 log.go:172] (0xc000957130) Data frame received for 1\nI0517 00:33:41.957812 3555 log.go:172] (0xc000aec6e0) (1) Data frame handling\nI0517 00:33:41.957828 3555 log.go:172] (0xc000aec6e0) (1) Data frame sent\nI0517 00:33:41.957852 3555 log.go:172] (0xc000957130) (0xc000aec6e0) Stream removed, broadcasting: 1\nI0517 00:33:41.957883 3555 log.go:172] (0xc000957130) Go away received\nI0517 00:33:41.958292 3555 log.go:172] (0xc000957130) (0xc000aec6e0) Stream removed, broadcasting: 1\nI0517 00:33:41.958321 3555 log.go:172] (0xc000957130) (0xc0006461e0) Stream removed, broadcasting: 3\nI0517 00:33:41.958337 3555 log.go:172] (0xc000957130) (0xc000524000) Stream removed, broadcasting: 5\n" May 17 00:33:41.964: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5594.svc.cluster.local\tcanonical name = externalsvc.services-5594.svc.cluster.local.\nName:\texternalsvc.services-5594.svc.cluster.local\nAddress: 10.100.235.238\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5594, will wait for the garbage collector to delete the pods May 17 00:33:42.023: INFO: Deleting ReplicationController externalsvc took: 5.350567ms May 17 00:33:42.323: INFO: Terminating ReplicationController externalsvc pods took: 300.300953ms May 17 00:33:55.291: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:33:55.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5594" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:24.046 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":154,"skipped":2645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client May 17 00:33:55.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-4114/configmap-test-dfd6e6d8-435e-43ac-8ffd-7dee3ce873af STEP: Creating a pod to test consume configMaps May 17 00:33:55.458: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd2dffa0-abed-4c47-ae4d-f4c0b0f8cb25" in namespace "configmap-4114" to be "Succeeded or Failed" May 17 00:33:55.461: INFO: Pod "pod-configmaps-cd2dffa0-abed-4c47-ae4d-f4c0b0f8cb25": Phase="Pending", Reason="", readiness=false. Elapsed: 3.752165ms May 17 00:33:57.466: INFO: Pod "pod-configmaps-cd2dffa0-abed-4c47-ae4d-f4c0b0f8cb25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007961634s May 17 00:33:59.470: INFO: Pod "pod-configmaps-cd2dffa0-abed-4c47-ae4d-f4c0b0f8cb25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012022813s STEP: Saw pod success May 17 00:33:59.470: INFO: Pod "pod-configmaps-cd2dffa0-abed-4c47-ae4d-f4c0b0f8cb25" satisfied condition "Succeeded or Failed" May 17 00:33:59.473: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-cd2dffa0-abed-4c47-ae4d-f4c0b0f8cb25 container env-test: STEP: delete the pod May 17 00:33:59.507: INFO: Waiting for pod pod-configmaps-cd2dffa0-abed-4c47-ae4d-f4c0b0f8cb25 to disappear May 17 00:33:59.513: INFO: Pod pod-configmaps-cd2dffa0-abed-4c47-ae4d-f4c0b0f8cb25 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:33:59.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4114" for this suite. 
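The ConfigMap test above asserts that a value injected into the pod's environment is visible to the container process. A minimal local analogue of that mechanism (FOO/bar are made-up stand-ins for the ConfigMap key and value, which the log does not print):

```shell
# Local sketch of env-var consumption: the parent injects FOO into the
# child process's environment, and the child reads it back.
FOO=bar sh -c 'echo "env-test saw FOO=$FOO"'
# → env-test saw FOO=bar
```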
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":155,"skipped":2681,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:33:59.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 17 00:34:06.164: INFO: Successfully updated pod "annotationupdatefa1d44dc-e328-4030-936d-2fb4903bf415" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:34:08.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4963" for this suite. 
• [SLOW TEST:8.679 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2701,"failed":0} S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:34:08.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:34:08.340: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 17 00:34:08.360: INFO: Pod name sample-pod: Found 0 pods out of 1 May 17 00:34:13.363: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 17 00:34:13.363: INFO: Creating deployment "test-rolling-update-deployment" May 17 00:34:13.372: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 17 
00:34:13.380: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 17 00:34:15.388: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 17 00:34:15.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272453, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272453, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272453, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725272453, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 00:34:17.397: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 17 00:34:17.409: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3638 /apis/apps/v1/namespaces/deployment-3638/deployments/test-rolling-update-deployment 2d6edd7e-3670-4105-a704-100323f9f8ef 5289100 1 2020-05-17 00:34:13 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-17 00:34:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-17 00:34:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f214a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-17 00:34:13 +0000 UTC,LastTransitionTime:2020-05-17 00:34:13 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-17 00:34:17 +0000 UTC,LastTransitionTime:2020-05-17 00:34:13 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 17 00:34:17.411: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-3638 /apis/apps/v1/namespaces/deployment-3638/replicasets/test-rolling-update-deployment-df7bb669b 67defc99-9233-407f-910d-c1a452ffec51 5289089 1 2020-05-17 00:34:13 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2d6edd7e-3670-4105-a704-100323f9f8ef 0xc003e6e0a0 0xc003e6e0a1}] [] [{kube-controller-manager Update apps/v1 2020-05-17 00:34:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d6edd7e-3670-4105-a704-100323f9f8ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e6e118 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 17 00:34:17.411: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 17 00:34:17.411: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3638 /apis/apps/v1/namespaces/deployment-3638/replicasets/test-rolling-update-controller c71211a1-c85d-487e-bfef-b4abeb941b36 5289099 2 2020-05-17 00:34:08 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2d6edd7e-3670-4105-a704-100323f9f8ef 0xc003fa1f97 0xc003fa1f98}] [] [{e2e.test Update apps/v1 2020-05-17 00:34:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-17 00:34:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d6edd7e-3670-4105-a704-100323f9f8ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003e6e038 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 17 00:34:17.413: INFO: Pod "test-rolling-update-deployment-df7bb669b-ln69m" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-ln69m test-rolling-update-deployment-df7bb669b- deployment-3638 /api/v1/namespaces/deployment-3638/pods/test-rolling-update-deployment-df7bb669b-ln69m e0a9dc82-4664-4d18-abe9-da6b051a40d1 5289088 0 2020-05-17 00:34:13 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 67defc99-9233-407f-910d-c1a452ffec51 0xc003e96410 0xc003e96411}] [] [{kube-controller-manager Update v1 2020-05-17 00:34:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67defc99-9233-407f-910d-c1a452ffec51\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:34:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.170\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-69ps2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-69ps2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Reso
urces:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-69ps2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodC
ondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:34:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:34:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.170,StartTime:2020-05-17 00:34:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:34:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://6dbfde41bfb242f014255a248276719dfbaa9d341e76ec3465a5fcc0ff63e2d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:34:17.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3638" for this suite. 
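For reference, the DeploymentSpec dumped above corresponds to a manifest roughly like the following sketch. This is an illustrative reconstruction from the logged fields (name, labels, replica count, strategy, revisionHistoryLimit, progressDeadlineSeconds, container image), not the test's actual source:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # rounds up to 1 extra pod during rollout for replicas=1
      maxUnavailable: 25%  # rounds down to 0 unavailable pods for replicas=1
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        imagePullPolicy: IfNotPresent
```

With replicas=1 and these defaults, the rollout surges one new pod alongside the old one and deletes the old pod only once the new one is Ready, which matches the log's sequence of Replicas:2/UpdatedReplicas:1 followed by the old ReplicaSet being scaled to 0.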
• [SLOW TEST:9.219 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":157,"skipped":2702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:34:17.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:34:21.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3355" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2754,"failed":0} SSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:34:21.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:34:21.666: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3136 I0517 00:34:21.686173 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3136, replica count: 1 I0517 00:34:22.736553 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:34:23.736786 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:34:24.737038 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:34:25.737295 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 00:34:25.864: INFO: Created: latency-svc-sfs5c May 17 
00:34:25.876: INFO: Got endpoints: latency-svc-sfs5c [38.705488ms] May 17 00:34:25.899: INFO: Created: latency-svc-nfssz May 17 00:34:25.911: INFO: Got endpoints: latency-svc-nfssz [34.733407ms] May 17 00:34:25.955: INFO: Created: latency-svc-tp7vd May 17 00:34:25.985: INFO: Got endpoints: latency-svc-tp7vd [108.493127ms] May 17 00:34:26.007: INFO: Created: latency-svc-vn7dv May 17 00:34:26.019: INFO: Got endpoints: latency-svc-vn7dv [143.027325ms] May 17 00:34:26.099: INFO: Created: latency-svc-7b7zt May 17 00:34:26.121: INFO: Created: latency-svc-4w945 May 17 00:34:26.122: INFO: Got endpoints: latency-svc-7b7zt [245.794042ms] May 17 00:34:26.146: INFO: Got endpoints: latency-svc-4w945 [269.661849ms] May 17 00:34:26.168: INFO: Created: latency-svc-rjp2n May 17 00:34:26.183: INFO: Got endpoints: latency-svc-rjp2n [306.752418ms] May 17 00:34:26.198: INFO: Created: latency-svc-fnclc May 17 00:34:26.260: INFO: Got endpoints: latency-svc-fnclc [383.45103ms] May 17 00:34:26.273: INFO: Created: latency-svc-bw82j May 17 00:34:26.302: INFO: Got endpoints: latency-svc-bw82j [425.699013ms] May 17 00:34:26.338: INFO: Created: latency-svc-sp9pd May 17 00:34:26.392: INFO: Got endpoints: latency-svc-sp9pd [515.789766ms] May 17 00:34:26.433: INFO: Created: latency-svc-97qmz May 17 00:34:26.446: INFO: Got endpoints: latency-svc-97qmz [569.985028ms] May 17 00:34:26.469: INFO: Created: latency-svc-bsqwn May 17 00:34:26.482: INFO: Got endpoints: latency-svc-bsqwn [606.24912ms] May 17 00:34:26.530: INFO: Created: latency-svc-xctd4 May 17 00:34:26.570: INFO: Got endpoints: latency-svc-xctd4 [694.003976ms] May 17 00:34:26.608: INFO: Created: latency-svc-n68n5 May 17 00:34:26.674: INFO: Got endpoints: latency-svc-n68n5 [797.73283ms] May 17 00:34:26.714: INFO: Created: latency-svc-782qf May 17 00:34:26.730: INFO: Got endpoints: latency-svc-782qf [853.414345ms] May 17 00:34:26.763: INFO: Created: latency-svc-dswnk May 17 00:34:26.811: INFO: Got endpoints: latency-svc-dswnk [934.890699ms] 
May 17 00:34:26.815: INFO: Created: latency-svc-42qvg May 17 00:34:26.827: INFO: Got endpoints: latency-svc-42qvg [915.920336ms] May 17 00:34:26.847: INFO: Created: latency-svc-92k4z May 17 00:34:26.857: INFO: Got endpoints: latency-svc-92k4z [870.635173ms] May 17 00:34:26.877: INFO: Created: latency-svc-mjgd5 May 17 00:34:26.900: INFO: Got endpoints: latency-svc-mjgd5 [881.040726ms] May 17 00:34:26.977: INFO: Created: latency-svc-dlhvt May 17 00:34:26.983: INFO: Got endpoints: latency-svc-dlhvt [861.39743ms] May 17 00:34:27.015: INFO: Created: latency-svc-g5ndv May 17 00:34:27.026: INFO: Got endpoints: latency-svc-g5ndv [879.867029ms] May 17 00:34:27.069: INFO: Created: latency-svc-qgs6x May 17 00:34:27.117: INFO: Got endpoints: latency-svc-qgs6x [933.576605ms] May 17 00:34:27.135: INFO: Created: latency-svc-md2gl May 17 00:34:27.147: INFO: Got endpoints: latency-svc-md2gl [887.424452ms] May 17 00:34:27.165: INFO: Created: latency-svc-kw9pb May 17 00:34:27.202: INFO: Got endpoints: latency-svc-kw9pb [899.675681ms] May 17 00:34:27.284: INFO: Created: latency-svc-vxlnb May 17 00:34:27.291: INFO: Got endpoints: latency-svc-vxlnb [898.633734ms] May 17 00:34:27.351: INFO: Created: latency-svc-x6lrz May 17 00:34:27.368: INFO: Got endpoints: latency-svc-x6lrz [921.798261ms] May 17 00:34:27.458: INFO: Created: latency-svc-ksm6n May 17 00:34:27.462: INFO: Got endpoints: latency-svc-ksm6n [979.614042ms] May 17 00:34:27.488: INFO: Created: latency-svc-tcn8w May 17 00:34:27.500: INFO: Got endpoints: latency-svc-tcn8w [929.758189ms] May 17 00:34:27.518: INFO: Created: latency-svc-wmsmk May 17 00:34:27.531: INFO: Got endpoints: latency-svc-wmsmk [857.102376ms] May 17 00:34:27.548: INFO: Created: latency-svc-5r6g4 May 17 00:34:27.619: INFO: Got endpoints: latency-svc-5r6g4 [889.451395ms] May 17 00:34:27.645: INFO: Created: latency-svc-zp2ks May 17 00:34:27.669: INFO: Got endpoints: latency-svc-zp2ks [857.910704ms] May 17 00:34:27.713: INFO: Created: latency-svc-s5kgq May 17 
00:34:27.775: INFO: Got endpoints: latency-svc-s5kgq [948.538153ms] May 17 00:34:27.795: INFO: Created: latency-svc-g5xlz May 17 00:34:27.821: INFO: Got endpoints: latency-svc-g5xlz [964.002653ms] May 17 00:34:27.850: INFO: Created: latency-svc-6gskn May 17 00:34:27.862: INFO: Got endpoints: latency-svc-6gskn [961.411965ms] May 17 00:34:27.914: INFO: Created: latency-svc-qn2pw May 17 00:34:27.917: INFO: Got endpoints: latency-svc-qn2pw [933.177253ms] May 17 00:34:27.969: INFO: Created: latency-svc-dhtpv May 17 00:34:27.984: INFO: Got endpoints: latency-svc-dhtpv [958.261406ms] May 17 00:34:28.056: INFO: Created: latency-svc-xmh7c May 17 00:34:28.061: INFO: Got endpoints: latency-svc-xmh7c [944.441069ms] May 17 00:34:28.112: INFO: Created: latency-svc-pf2ll May 17 00:34:28.136: INFO: Got endpoints: latency-svc-pf2ll [989.07571ms] May 17 00:34:28.221: INFO: Created: latency-svc-52p6t May 17 00:34:28.231: INFO: Got endpoints: latency-svc-52p6t [1.029318016s] May 17 00:34:28.251: INFO: Created: latency-svc-2pqcf May 17 00:34:28.273: INFO: Got endpoints: latency-svc-2pqcf [982.401692ms] May 17 00:34:28.380: INFO: Created: latency-svc-8bf6s May 17 00:34:28.399: INFO: Got endpoints: latency-svc-8bf6s [1.030913397s] May 17 00:34:28.437: INFO: Created: latency-svc-bmcdp May 17 00:34:28.453: INFO: Got endpoints: latency-svc-bmcdp [991.034744ms] May 17 00:34:28.536: INFO: Created: latency-svc-xw449 May 17 00:34:28.544: INFO: Got endpoints: latency-svc-xw449 [1.04354868s] May 17 00:34:28.574: INFO: Created: latency-svc-gr5gk May 17 00:34:28.584: INFO: Got endpoints: latency-svc-gr5gk [1.052308135s] May 17 00:34:28.612: INFO: Created: latency-svc-s4gzq May 17 00:34:28.620: INFO: Got endpoints: latency-svc-s4gzq [1.000853714s] May 17 00:34:28.674: INFO: Created: latency-svc-mzw7h May 17 00:34:28.687: INFO: Got endpoints: latency-svc-mzw7h [1.018196029s] May 17 00:34:28.750: INFO: Created: latency-svc-zpsqz May 17 00:34:28.836: INFO: Got endpoints: latency-svc-zpsqz 
[1.060213563s] May 17 00:34:28.851: INFO: Created: latency-svc-rbb4x May 17 00:34:28.876: INFO: Got endpoints: latency-svc-rbb4x [1.054369267s] May 17 00:34:28.876: INFO: Created: latency-svc-zc5j4 May 17 00:34:28.906: INFO: Got endpoints: latency-svc-zc5j4 [1.043847761s] May 17 00:34:28.934: INFO: Created: latency-svc-6dphq May 17 00:34:29.021: INFO: Got endpoints: latency-svc-6dphq [1.104546119s] May 17 00:34:29.024: INFO: Created: latency-svc-9f6t2 May 17 00:34:29.043: INFO: Got endpoints: latency-svc-9f6t2 [1.058819037s] May 17 00:34:29.079: INFO: Created: latency-svc-jjxnj May 17 00:34:29.103: INFO: Got endpoints: latency-svc-jjxnj [1.042164999s] May 17 00:34:29.165: INFO: Created: latency-svc-9xpkf May 17 00:34:29.192: INFO: Got endpoints: latency-svc-9xpkf [1.056052738s] May 17 00:34:29.193: INFO: Created: latency-svc-n4ppw May 17 00:34:29.229: INFO: Got endpoints: latency-svc-n4ppw [998.106292ms] May 17 00:34:29.259: INFO: Created: latency-svc-7xrj7 May 17 00:34:29.364: INFO: Got endpoints: latency-svc-7xrj7 [1.090857377s] May 17 00:34:29.414: INFO: Created: latency-svc-7g62q May 17 00:34:29.447: INFO: Got endpoints: latency-svc-7g62q [1.048264525s] May 17 00:34:29.512: INFO: Created: latency-svc-frpr8 May 17 00:34:29.518: INFO: Got endpoints: latency-svc-frpr8 [1.065065455s] May 17 00:34:29.559: INFO: Created: latency-svc-2wdd6 May 17 00:34:29.589: INFO: Got endpoints: latency-svc-2wdd6 [1.04545657s] May 17 00:34:29.662: INFO: Created: latency-svc-b7mlg May 17 00:34:29.667: INFO: Got endpoints: latency-svc-b7mlg [1.083524048s] May 17 00:34:29.696: INFO: Created: latency-svc-xl2p5 May 17 00:34:29.711: INFO: Got endpoints: latency-svc-xl2p5 [1.090907484s] May 17 00:34:29.750: INFO: Created: latency-svc-tkp5b May 17 00:34:29.818: INFO: Got endpoints: latency-svc-tkp5b [1.130016663s] May 17 00:34:29.819: INFO: Created: latency-svc-mm7d7 May 17 00:34:29.826: INFO: Got endpoints: latency-svc-mm7d7 [990.263261ms] May 17 00:34:29.841: INFO: Created: 
latency-svc-dncsl May 17 00:34:29.854: INFO: Got endpoints: latency-svc-dncsl [978.734278ms] May 17 00:34:29.871: INFO: Created: latency-svc-7ttqr May 17 00:34:29.885: INFO: Got endpoints: latency-svc-7ttqr [978.859953ms] May 17 00:34:29.907: INFO: Created: latency-svc-gckvz May 17 00:34:29.915: INFO: Got endpoints: latency-svc-gckvz [893.468636ms] May 17 00:34:29.961: INFO: Created: latency-svc-pnlcs May 17 00:34:29.965: INFO: Got endpoints: latency-svc-pnlcs [921.537975ms] May 17 00:34:29.991: INFO: Created: latency-svc-5vjdk May 17 00:34:30.006: INFO: Got endpoints: latency-svc-5vjdk [902.252576ms] May 17 00:34:30.027: INFO: Created: latency-svc-c5h66 May 17 00:34:30.042: INFO: Got endpoints: latency-svc-c5h66 [849.673794ms] May 17 00:34:30.057: INFO: Created: latency-svc-z5xs2 May 17 00:34:30.141: INFO: Got endpoints: latency-svc-z5xs2 [911.849285ms] May 17 00:34:30.164: INFO: Created: latency-svc-5x5hr May 17 00:34:30.181: INFO: Got endpoints: latency-svc-5x5hr [816.655365ms] May 17 00:34:30.200: INFO: Created: latency-svc-bdgq7 May 17 00:34:30.211: INFO: Got endpoints: latency-svc-bdgq7 [763.285899ms] May 17 00:34:30.296: INFO: Created: latency-svc-mcxpl May 17 00:34:30.299: INFO: Got endpoints: latency-svc-mcxpl [780.888574ms] May 17 00:34:30.331: INFO: Created: latency-svc-dp9mn May 17 00:34:30.342: INFO: Got endpoints: latency-svc-dp9mn [753.221313ms] May 17 00:34:30.362: INFO: Created: latency-svc-6wxhv May 17 00:34:30.374: INFO: Got endpoints: latency-svc-6wxhv [706.330235ms] May 17 00:34:30.392: INFO: Created: latency-svc-6nqg2 May 17 00:34:30.465: INFO: Got endpoints: latency-svc-6nqg2 [754.212189ms] May 17 00:34:30.468: INFO: Created: latency-svc-wbthm May 17 00:34:30.501: INFO: Got endpoints: latency-svc-wbthm [683.703184ms] May 17 00:34:30.531: INFO: Created: latency-svc-9ft6z May 17 00:34:30.544: INFO: Got endpoints: latency-svc-9ft6z [718.439333ms] May 17 00:34:30.560: INFO: Created: latency-svc-mqpjc May 17 00:34:30.632: INFO: Got endpoints: 
latency-svc-mqpjc [777.128239ms] May 17 00:34:30.638: INFO: Created: latency-svc-4q6qz May 17 00:34:30.663: INFO: Got endpoints: latency-svc-4q6qz [778.708227ms] May 17 00:34:30.688: INFO: Created: latency-svc-4hhw6 May 17 00:34:30.701: INFO: Got endpoints: latency-svc-4hhw6 [786.522418ms] May 17 00:34:30.717: INFO: Created: latency-svc-v8snb May 17 00:34:30.731: INFO: Got endpoints: latency-svc-v8snb [766.74332ms] May 17 00:34:30.787: INFO: Created: latency-svc-wvrb4 May 17 00:34:30.819: INFO: Got endpoints: latency-svc-wvrb4 [812.828312ms] May 17 00:34:30.860: INFO: Created: latency-svc-2ldhb May 17 00:34:30.876: INFO: Got endpoints: latency-svc-2ldhb [833.761621ms] May 17 00:34:30.931: INFO: Created: latency-svc-jjqkf May 17 00:34:30.942: INFO: Got endpoints: latency-svc-jjqkf [801.14728ms] May 17 00:34:30.957: INFO: Created: latency-svc-cbtbd May 17 00:34:31.017: INFO: Got endpoints: latency-svc-cbtbd [836.346645ms] May 17 00:34:31.077: INFO: Created: latency-svc-dnj6v May 17 00:34:31.100: INFO: Got endpoints: latency-svc-dnj6v [888.915068ms] May 17 00:34:31.125: INFO: Created: latency-svc-4p8nn May 17 00:34:31.156: INFO: Got endpoints: latency-svc-4p8nn [856.349337ms] May 17 00:34:31.225: INFO: Created: latency-svc-g9t8k May 17 00:34:31.231: INFO: Got endpoints: latency-svc-g9t8k [888.48911ms] May 17 00:34:31.275: INFO: Created: latency-svc-gjk9w May 17 00:34:31.285: INFO: Got endpoints: latency-svc-gjk9w [911.843467ms] May 17 00:34:31.389: INFO: Created: latency-svc-trbgk May 17 00:34:31.390: INFO: Got endpoints: latency-svc-trbgk [925.110671ms] May 17 00:34:31.419: INFO: Created: latency-svc-dh5gz May 17 00:34:31.448: INFO: Got endpoints: latency-svc-dh5gz [946.691139ms] May 17 00:34:31.485: INFO: Created: latency-svc-f57zq May 17 00:34:31.554: INFO: Got endpoints: latency-svc-f57zq [1.009727997s] May 17 00:34:31.559: INFO: Created: latency-svc-bg7lh May 17 00:34:31.582: INFO: Got endpoints: latency-svc-bg7lh [949.914614ms] May 17 00:34:31.604: INFO: 
Created: latency-svc-dlxmq May 17 00:34:31.629: INFO: Got endpoints: latency-svc-dlxmq [965.69365ms] May 17 00:34:31.728: INFO: Created: latency-svc-scjt6 May 17 00:34:31.741: INFO: Got endpoints: latency-svc-scjt6 [1.039269749s] May 17 00:34:31.760: INFO: Created: latency-svc-cdqtt May 17 00:34:31.774: INFO: Got endpoints: latency-svc-cdqtt [1.042640946s] May 17 00:34:31.872: INFO: Created: latency-svc-thpt7 May 17 00:34:31.899: INFO: Created: latency-svc-zbjph May 17 00:34:31.899: INFO: Got endpoints: latency-svc-thpt7 [1.080157824s] May 17 00:34:31.923: INFO: Got endpoints: latency-svc-zbjph [1.046804645s] May 17 00:34:31.947: INFO: Created: latency-svc-5qx5s May 17 00:34:31.961: INFO: Got endpoints: latency-svc-5qx5s [1.018342335s] May 17 00:34:32.033: INFO: Created: latency-svc-bdsgl May 17 00:34:32.044: INFO: Got endpoints: latency-svc-bdsgl [1.026758988s] May 17 00:34:32.067: INFO: Created: latency-svc-qcpvb May 17 00:34:32.080: INFO: Got endpoints: latency-svc-qcpvb [980.695863ms] May 17 00:34:32.096: INFO: Created: latency-svc-96hfd May 17 00:34:32.111: INFO: Got endpoints: latency-svc-96hfd [954.971785ms] May 17 00:34:32.189: INFO: Created: latency-svc-x2rtg May 17 00:34:32.192: INFO: Got endpoints: latency-svc-x2rtg [960.619415ms] May 17 00:34:32.234: INFO: Created: latency-svc-prmwx May 17 00:34:32.250: INFO: Got endpoints: latency-svc-prmwx [963.980671ms] May 17 00:34:32.282: INFO: Created: latency-svc-q6gtf May 17 00:34:32.357: INFO: Got endpoints: latency-svc-q6gtf [965.986006ms] May 17 00:34:32.373: INFO: Created: latency-svc-2842f May 17 00:34:32.388: INFO: Got endpoints: latency-svc-2842f [939.64891ms] May 17 00:34:32.409: INFO: Created: latency-svc-9nmll May 17 00:34:32.418: INFO: Got endpoints: latency-svc-9nmll [863.942221ms] May 17 00:34:32.450: INFO: Created: latency-svc-zxb7w May 17 00:34:32.506: INFO: Got endpoints: latency-svc-zxb7w [118.073198ms] May 17 00:34:32.535: INFO: Created: latency-svc-4xbcf May 17 00:34:32.548: INFO: Got 
endpoints: latency-svc-4xbcf [966.87927ms] May 17 00:34:32.577: INFO: Created: latency-svc-s5sjs May 17 00:34:32.591: INFO: Got endpoints: latency-svc-s5sjs [962.197454ms] May 17 00:34:32.650: INFO: Created: latency-svc-mzmxh May 17 00:34:32.659: INFO: Got endpoints: latency-svc-mzmxh [918.686405ms] May 17 00:34:32.684: INFO: Created: latency-svc-rs9mt May 17 00:34:32.708: INFO: Got endpoints: latency-svc-rs9mt [933.974341ms] May 17 00:34:32.733: INFO: Created: latency-svc-gmwpt May 17 00:34:32.741: INFO: Got endpoints: latency-svc-gmwpt [842.480164ms] May 17 00:34:32.811: INFO: Created: latency-svc-7bt6t May 17 00:34:32.815: INFO: Got endpoints: latency-svc-7bt6t [891.766519ms] May 17 00:34:32.848: INFO: Created: latency-svc-rrcfg May 17 00:34:33.021: INFO: Got endpoints: latency-svc-rrcfg [1.060052608s] May 17 00:34:33.026: INFO: Created: latency-svc-blv95 May 17 00:34:33.055: INFO: Got endpoints: latency-svc-blv95 [1.010944691s] May 17 00:34:33.087: INFO: Created: latency-svc-bqtdw May 17 00:34:33.189: INFO: Got endpoints: latency-svc-bqtdw [1.108323619s] May 17 00:34:33.192: INFO: Created: latency-svc-z952c May 17 00:34:33.205: INFO: Got endpoints: latency-svc-z952c [1.094188184s] May 17 00:34:33.284: INFO: Created: latency-svc-s2cf7 May 17 00:34:33.368: INFO: Got endpoints: latency-svc-s2cf7 [1.176659524s] May 17 00:34:33.388: INFO: Created: latency-svc-hppfh May 17 00:34:33.412: INFO: Got endpoints: latency-svc-hppfh [1.16260946s] May 17 00:34:33.459: INFO: Created: latency-svc-8rk29 May 17 00:34:33.524: INFO: Got endpoints: latency-svc-8rk29 [1.167686325s] May 17 00:34:33.526: INFO: Created: latency-svc-9cfsc May 17 00:34:33.532: INFO: Got endpoints: latency-svc-9cfsc [1.113896567s] May 17 00:34:33.580: INFO: Created: latency-svc-cbjms May 17 00:34:33.680: INFO: Got endpoints: latency-svc-cbjms [1.173937356s] May 17 00:34:33.693: INFO: Created: latency-svc-j62f6 May 17 00:34:33.707: INFO: Got endpoints: latency-svc-j62f6 [1.15828962s] May 17 00:34:33.743: 
INFO: Created: latency-svc-sdn2s May 17 00:34:33.755: INFO: Got endpoints: latency-svc-sdn2s [1.163793358s] May 17 00:34:33.823: INFO: Created: latency-svc-z865z May 17 00:34:33.849: INFO: Got endpoints: latency-svc-z865z [1.189828236s] May 17 00:34:33.887: INFO: Created: latency-svc-7prsm May 17 00:34:33.997: INFO: Got endpoints: latency-svc-7prsm [1.289099096s] May 17 00:34:33.999: INFO: Created: latency-svc-m4mmz May 17 00:34:34.020: INFO: Got endpoints: latency-svc-m4mmz [1.278751143s] May 17 00:34:34.054: INFO: Created: latency-svc-2tzqv May 17 00:34:34.068: INFO: Got endpoints: latency-svc-2tzqv [1.253375277s] May 17 00:34:34.090: INFO: Created: latency-svc-9htrp May 17 00:34:34.153: INFO: Got endpoints: latency-svc-9htrp [1.132071828s] May 17 00:34:34.173: INFO: Created: latency-svc-rlhlq May 17 00:34:34.189: INFO: Got endpoints: latency-svc-rlhlq [1.134042241s] May 17 00:34:34.222: INFO: Created: latency-svc-z68g8 May 17 00:34:34.237: INFO: Got endpoints: latency-svc-z68g8 [1.048556013s] May 17 00:34:34.290: INFO: Created: latency-svc-pf8p6 May 17 00:34:34.294: INFO: Got endpoints: latency-svc-pf8p6 [1.088886002s] May 17 00:34:34.347: INFO: Created: latency-svc-drxc9 May 17 00:34:34.371: INFO: Got endpoints: latency-svc-drxc9 [1.002841417s] May 17 00:34:34.446: INFO: Created: latency-svc-b928m May 17 00:34:34.453: INFO: Got endpoints: latency-svc-b928m [1.041242265s] May 17 00:34:34.492: INFO: Created: latency-svc-k7954 May 17 00:34:34.519: INFO: Got endpoints: latency-svc-k7954 [994.993951ms] May 17 00:34:34.659: INFO: Created: latency-svc-bfhtg May 17 00:34:34.901: INFO: Got endpoints: latency-svc-bfhtg [1.369207s] May 17 00:34:34.911: INFO: Created: latency-svc-xrmx5 May 17 00:34:34.928: INFO: Got endpoints: latency-svc-xrmx5 [1.24796831s] May 17 00:34:34.991: INFO: Created: latency-svc-wjcvx May 17 00:34:35.111: INFO: Got endpoints: latency-svc-wjcvx [1.403706221s] May 17 00:34:35.114: INFO: Created: latency-svc-szfgd May 17 00:34:35.120: INFO: Got 
endpoints: latency-svc-szfgd [1.364491783s] May 17 00:34:35.145: INFO: Created: latency-svc-hq78q May 17 00:34:35.188: INFO: Got endpoints: latency-svc-hq78q [1.338425119s] May 17 00:34:35.318: INFO: Created: latency-svc-sc89v May 17 00:34:35.320: INFO: Got endpoints: latency-svc-sc89v [1.323025944s] May 17 00:34:35.524: INFO: Created: latency-svc-4jprn May 17 00:34:35.527: INFO: Got endpoints: latency-svc-4jprn [1.507049809s] May 17 00:34:35.689: INFO: Created: latency-svc-sbtw6 May 17 00:34:35.698: INFO: Got endpoints: latency-svc-sbtw6 [1.629646478s] May 17 00:34:35.836: INFO: Created: latency-svc-z2bfd May 17 00:34:35.896: INFO: Created: latency-svc-rgpkw May 17 00:34:35.896: INFO: Got endpoints: latency-svc-z2bfd [1.742990805s] May 17 00:34:35.920: INFO: Got endpoints: latency-svc-rgpkw [1.730605238s] May 17 00:34:36.051: INFO: Created: latency-svc-cjg7r May 17 00:34:36.057: INFO: Got endpoints: latency-svc-cjg7r [1.819808693s] May 17 00:34:36.080: INFO: Created: latency-svc-7lmwn May 17 00:34:36.142: INFO: Got endpoints: latency-svc-7lmwn [1.84778299s] May 17 00:34:36.207: INFO: Created: latency-svc-jgjhw May 17 00:34:36.219: INFO: Got endpoints: latency-svc-jgjhw [1.847486412s] May 17 00:34:36.276: INFO: Created: latency-svc-vk4jd May 17 00:34:36.292: INFO: Got endpoints: latency-svc-vk4jd [1.838234754s] May 17 00:34:36.356: INFO: Created: latency-svc-9xlpt May 17 00:34:36.381: INFO: Got endpoints: latency-svc-9xlpt [1.861201612s] May 17 00:34:36.554: INFO: Created: latency-svc-rcvld May 17 00:34:36.562: INFO: Got endpoints: latency-svc-rcvld [1.660686126s] May 17 00:34:36.584: INFO: Created: latency-svc-m25jp May 17 00:34:36.629: INFO: Got endpoints: latency-svc-m25jp [1.701018729s] May 17 00:34:36.650: INFO: Created: latency-svc-twt6s May 17 00:34:36.739: INFO: Got endpoints: latency-svc-twt6s [1.628641337s] May 17 00:34:36.771: INFO: Created: latency-svc-vh5wc May 17 00:34:36.785: INFO: Got endpoints: latency-svc-vh5wc [1.664822106s] May 17 00:34:36.895: 
INFO: Created: latency-svc-c9l5k May 17 00:34:36.934: INFO: Got endpoints: latency-svc-c9l5k [1.746699798s] May 17 00:34:36.974: INFO: Created: latency-svc-zjjck May 17 00:34:37.057: INFO: Got endpoints: latency-svc-zjjck [1.736605426s] May 17 00:34:37.083: INFO: Created: latency-svc-2r6xb May 17 00:34:37.133: INFO: Got endpoints: latency-svc-2r6xb [1.605462897s] May 17 00:34:37.155: INFO: Created: latency-svc-6fqd5 May 17 00:34:37.250: INFO: Got endpoints: latency-svc-6fqd5 [1.55187753s] May 17 00:34:37.274: INFO: Created: latency-svc-l9nk7 May 17 00:34:37.331: INFO: Got endpoints: latency-svc-l9nk7 [1.434986016s] May 17 00:34:37.416: INFO: Created: latency-svc-xkq6q May 17 00:34:37.445: INFO: Got endpoints: latency-svc-xkq6q [1.525578453s] May 17 00:34:37.485: INFO: Created: latency-svc-qdnn7 May 17 00:34:37.560: INFO: Got endpoints: latency-svc-qdnn7 [1.50251435s] May 17 00:34:37.574: INFO: Created: latency-svc-brh67 May 17 00:34:37.626: INFO: Got endpoints: latency-svc-brh67 [1.484599075s] May 17 00:34:37.752: INFO: Created: latency-svc-5zf8g May 17 00:34:37.764: INFO: Got endpoints: latency-svc-5zf8g [1.545277263s] May 17 00:34:37.784: INFO: Created: latency-svc-dq6vz May 17 00:34:37.831: INFO: Got endpoints: latency-svc-dq6vz [1.538819735s] May 17 00:34:37.850: INFO: Created: latency-svc-w9lbh May 17 00:34:37.907: INFO: Got endpoints: latency-svc-w9lbh [1.526460317s] May 17 00:34:37.963: INFO: Created: latency-svc-9fgk5 May 17 00:34:37.980: INFO: Got endpoints: latency-svc-9fgk5 [1.417834513s] May 17 00:34:38.006: INFO: Created: latency-svc-kx5st May 17 00:34:38.099: INFO: Got endpoints: latency-svc-kx5st [1.469985317s] May 17 00:34:38.114: INFO: Created: latency-svc-rv88m May 17 00:34:38.130: INFO: Got endpoints: latency-svc-rv88m [1.391052105s] May 17 00:34:38.191: INFO: Created: latency-svc-4jmbp May 17 00:34:38.255: INFO: Got endpoints: latency-svc-4jmbp [1.470263777s] May 17 00:34:38.257: INFO: Created: latency-svc-79tmh May 17 00:34:38.294: INFO: Got 
endpoints: latency-svc-79tmh [1.359484166s] May 17 00:34:38.335: INFO: Created: latency-svc-z6rqr May 17 00:34:38.458: INFO: Got endpoints: latency-svc-z6rqr [1.401179694s] May 17 00:34:38.460: INFO: Created: latency-svc-chdvc May 17 00:34:38.515: INFO: Got endpoints: latency-svc-chdvc [1.38235607s] May 17 00:34:38.552: INFO: Created: latency-svc-mhgj9 May 17 00:34:38.638: INFO: Got endpoints: latency-svc-mhgj9 [1.387987147s] May 17 00:34:38.659: INFO: Created: latency-svc-pgcdt May 17 00:34:38.666: INFO: Got endpoints: latency-svc-pgcdt [1.334567779s] May 17 00:34:38.781: INFO: Created: latency-svc-xcktz May 17 00:34:38.784: INFO: Got endpoints: latency-svc-xcktz [1.339014494s] May 17 00:34:38.852: INFO: Created: latency-svc-pdmdd May 17 00:34:38.881: INFO: Got endpoints: latency-svc-pdmdd [1.321087362s] May 17 00:34:38.973: INFO: Created: latency-svc-2xx4t May 17 00:34:38.980: INFO: Got endpoints: latency-svc-2xx4t [1.353993043s] May 17 00:34:39.049: INFO: Created: latency-svc-kl559 May 17 00:34:39.065: INFO: Got endpoints: latency-svc-kl559 [1.300906317s] May 17 00:34:39.183: INFO: Created: latency-svc-8k74l May 17 00:34:39.191: INFO: Got endpoints: latency-svc-8k74l [1.360005494s] May 17 00:34:39.252: INFO: Created: latency-svc-59xrn May 17 00:34:39.269: INFO: Got endpoints: latency-svc-59xrn [1.361917141s] May 17 00:34:39.344: INFO: Created: latency-svc-lkrzj May 17 00:34:39.386: INFO: Got endpoints: latency-svc-lkrzj [1.406031947s] May 17 00:34:39.420: INFO: Created: latency-svc-pmzmv May 17 00:34:39.458: INFO: Got endpoints: latency-svc-pmzmv [1.359102218s] May 17 00:34:39.487: INFO: Created: latency-svc-7nk2t May 17 00:34:39.504: INFO: Got endpoints: latency-svc-7nk2t [1.373562522s] May 17 00:34:39.529: INFO: Created: latency-svc-9blnf May 17 00:34:39.547: INFO: Got endpoints: latency-svc-9blnf [1.292296142s] May 17 00:34:39.602: INFO: Created: latency-svc-2fcqv May 17 00:34:39.618: INFO: Got endpoints: latency-svc-2fcqv [1.324484108s] May 17 00:34:39.654: 
INFO: Created: latency-svc-jf5c2 May 17 00:34:39.667: INFO: Got endpoints: latency-svc-jf5c2 [1.208408357s] May 17 00:34:39.691: INFO: Created: latency-svc-dd9mh May 17 00:34:39.751: INFO: Got endpoints: latency-svc-dd9mh [1.236261988s] May 17 00:34:39.755: INFO: Created: latency-svc-gk9r9 May 17 00:34:39.763: INFO: Got endpoints: latency-svc-gk9r9 [1.12521588s] May 17 00:34:39.792: INFO: Created: latency-svc-t4vjp May 17 00:34:39.811: INFO: Got endpoints: latency-svc-t4vjp [1.145830946s] May 17 00:34:39.835: INFO: Created: latency-svc-74s4w May 17 00:34:39.847: INFO: Got endpoints: latency-svc-74s4w [1.062917683s] May 17 00:34:39.901: INFO: Created: latency-svc-5mz54 May 17 00:34:39.914: INFO: Got endpoints: latency-svc-5mz54 [1.032617428s] May 17 00:34:39.938: INFO: Created: latency-svc-rnt6l May 17 00:34:39.962: INFO: Got endpoints: latency-svc-rnt6l [981.619338ms] May 17 00:34:39.992: INFO: Created: latency-svc-9l6gf May 17 00:34:40.045: INFO: Got endpoints: latency-svc-9l6gf [979.803164ms] May 17 00:34:40.063: INFO: Created: latency-svc-vnpx5 May 17 00:34:40.076: INFO: Got endpoints: latency-svc-vnpx5 [885.668511ms] May 17 00:34:40.099: INFO: Created: latency-svc-csh8t May 17 00:34:40.113: INFO: Got endpoints: latency-svc-csh8t [843.794757ms] May 17 00:34:40.135: INFO: Created: latency-svc-jfl9r May 17 00:34:40.195: INFO: Got endpoints: latency-svc-jfl9r [808.431366ms] May 17 00:34:40.219: INFO: Created: latency-svc-7hhcp May 17 00:34:40.247: INFO: Got endpoints: latency-svc-7hhcp [788.736059ms] May 17 00:34:40.267: INFO: Created: latency-svc-9vs92 May 17 00:34:40.282: INFO: Got endpoints: latency-svc-9vs92 [777.806626ms] May 17 00:34:40.368: INFO: Created: latency-svc-nn5fb May 17 00:34:40.405: INFO: Got endpoints: latency-svc-nn5fb [858.294692ms] May 17 00:34:40.406: INFO: Created: latency-svc-xqqx9 May 17 00:34:40.429: INFO: Got endpoints: latency-svc-xqqx9 [810.904835ms] May 17 00:34:40.429: INFO: Latencies: [34.733407ms 108.493127ms 118.073198ms 
143.027325ms 245.794042ms 269.661849ms 306.752418ms 383.45103ms 425.699013ms 515.789766ms 569.985028ms 606.24912ms 683.703184ms 694.003976ms 706.330235ms 718.439333ms 753.221313ms 754.212189ms 763.285899ms 766.74332ms 777.128239ms 777.806626ms 778.708227ms 780.888574ms 786.522418ms 788.736059ms 797.73283ms 801.14728ms 808.431366ms 810.904835ms 812.828312ms 816.655365ms 833.761621ms 836.346645ms 842.480164ms 843.794757ms 849.673794ms 853.414345ms 856.349337ms 857.102376ms 857.910704ms 858.294692ms 861.39743ms 863.942221ms 870.635173ms 879.867029ms 881.040726ms 885.668511ms 887.424452ms 888.48911ms 888.915068ms 889.451395ms 891.766519ms 893.468636ms 898.633734ms 899.675681ms 902.252576ms 911.843467ms 911.849285ms 915.920336ms 918.686405ms 921.537975ms 921.798261ms 925.110671ms 929.758189ms 933.177253ms 933.576605ms 933.974341ms 934.890699ms 939.64891ms 944.441069ms 946.691139ms 948.538153ms 949.914614ms 954.971785ms 958.261406ms 960.619415ms 961.411965ms 962.197454ms 963.980671ms 964.002653ms 965.69365ms 965.986006ms 966.87927ms 978.734278ms 978.859953ms 979.614042ms 979.803164ms 980.695863ms 981.619338ms 982.401692ms 989.07571ms 990.263261ms 991.034744ms 994.993951ms 998.106292ms 1.000853714s 1.002841417s 1.009727997s 1.010944691s 1.018196029s 1.018342335s 1.026758988s 1.029318016s 1.030913397s 1.032617428s 1.039269749s 1.041242265s 1.042164999s 1.042640946s 1.04354868s 1.043847761s 1.04545657s 1.046804645s 1.048264525s 1.048556013s 1.052308135s 1.054369267s 1.056052738s 1.058819037s 1.060052608s 1.060213563s 1.062917683s 1.065065455s 1.080157824s 1.083524048s 1.088886002s 1.090857377s 1.090907484s 1.094188184s 1.104546119s 1.108323619s 1.113896567s 1.12521588s 1.130016663s 1.132071828s 1.134042241s 1.145830946s 1.15828962s 1.16260946s 1.163793358s 1.167686325s 1.173937356s 1.176659524s 1.189828236s 1.208408357s 1.236261988s 1.24796831s 1.253375277s 1.278751143s 1.289099096s 1.292296142s 1.300906317s 1.321087362s 1.323025944s 1.324484108s 1.334567779s 1.338425119s 
1.339014494s 1.353993043s 1.359102218s 1.359484166s 1.360005494s 1.361917141s 1.364491783s 1.369207s 1.373562522s 1.38235607s 1.387987147s 1.391052105s 1.401179694s 1.403706221s 1.406031947s 1.417834513s 1.434986016s 1.469985317s 1.470263777s 1.484599075s 1.50251435s 1.507049809s 1.525578453s 1.526460317s 1.538819735s 1.545277263s 1.55187753s 1.605462897s 1.628641337s 1.629646478s 1.660686126s 1.664822106s 1.701018729s 1.730605238s 1.736605426s 1.742990805s 1.746699798s 1.819808693s 1.838234754s 1.847486412s 1.84778299s 1.861201612s]
May 17 00:34:40.430: INFO: 50 %ile: 1.018196029s
May 17 00:34:40.430: INFO: 90 %ile: 1.525578453s
May 17 00:34:40.430: INFO: 99 %ile: 1.84778299s
May 17 00:34:40.430: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:34:40.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3136" for this suite.
• [SLOW TEST:18.839 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":159,"skipped":2760,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:34:40.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May 17 00:34:40.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May 17 00:34:53.043: INFO: >>> kubeConfig: /root/.kube/config
May 17 00:34:56.079: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:35:08.627: INFO: Waiting up to 3m0s for all
(but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-338" for this suite.
• [SLOW TEST:28.190 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":160,"skipped":2766,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:35:08.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-5c279656-d4e7-4c0c-99e0-9559d5f75877
STEP: Creating a pod to test consume secrets
May 17 00:35:08.850: INFO: Waiting up to 5m0s for pod "pod-secrets-2fb18ba5-896d-4206-9e8b-6ebb6859f181" in namespace "secrets-1218" to be "Succeeded or Failed"
May 17 00:35:08.887: INFO: Pod "pod-secrets-2fb18ba5-896d-4206-9e8b-6ebb6859f181": Phase="Pending", Reason="", readiness=false. Elapsed: 37.313387ms
May 17 00:35:11.034: INFO: Pod "pod-secrets-2fb18ba5-896d-4206-9e8b-6ebb6859f181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183482162s
May 17 00:35:13.038: INFO: Pod "pod-secrets-2fb18ba5-896d-4206-9e8b-6ebb6859f181": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.187757632s
STEP: Saw pod success
May 17 00:35:13.038: INFO: Pod "pod-secrets-2fb18ba5-896d-4206-9e8b-6ebb6859f181" satisfied condition "Succeeded or Failed"
May 17 00:35:13.041: INFO: Trying to get logs from node latest-worker pod pod-secrets-2fb18ba5-896d-4206-9e8b-6ebb6859f181 container secret-volume-test:
STEP: delete the pod
May 17 00:35:13.078: INFO: Waiting for pod pod-secrets-2fb18ba5-896d-4206-9e8b-6ebb6859f181 to disappear
May 17 00:35:13.134: INFO: Pod pod-secrets-2fb18ba5-896d-4206-9e8b-6ebb6859f181 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:35:13.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1218" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2777,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:35:13.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 17 00:35:13.217: INFO: Waiting up to 5m0s for pod "downward-api-61d05795-14a0-4323-9cd0-53e855c44835" in namespace "downward-api-8701" to be "Succeeded or Failed"
May 17 00:35:13.220: INFO: Pod "downward-api-61d05795-14a0-4323-9cd0-53e855c44835": Phase="Pending", Reason="", readiness=false. Elapsed: 3.044441ms
May 17 00:35:15.261: INFO: Pod "downward-api-61d05795-14a0-4323-9cd0-53e855c44835": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043951452s
May 17 00:35:17.266: INFO: Pod "downward-api-61d05795-14a0-4323-9cd0-53e855c44835": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048864122s
STEP: Saw pod success
May 17 00:35:17.266: INFO: Pod "downward-api-61d05795-14a0-4323-9cd0-53e855c44835" satisfied condition "Succeeded or Failed"
May 17 00:35:17.270: INFO: Trying to get logs from node latest-worker pod downward-api-61d05795-14a0-4323-9cd0-53e855c44835 container dapi-container:
STEP: delete the pod
May 17 00:35:17.308: INFO: Waiting for pod downward-api-61d05795-14a0-4323-9cd0-53e855c44835 to disappear
May 17 00:35:17.316: INFO: Pod downward-api-61d05795-14a0-4323-9cd0-53e855c44835 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:35:17.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8701" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2784,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:35:17.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 17 00:35:17.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2866'
May 17 00:35:17.725: INFO: stderr: ""
May 17 00:35:17.725: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May 17 00:35:17.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2866'
May 17 00:35:18.035: INFO: stderr: ""
May 17 00:35:18.035: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 17 00:35:19.039: INFO: Selector matched 1 pods for map[app:agnhost]
May 17 00:35:19.040: INFO: Found 0 / 1
May 17 00:35:20.040: INFO: Selector matched 1 pods for map[app:agnhost]
May 17 00:35:20.040: INFO: Found 0 / 1
May 17 00:35:21.075: INFO: Selector matched 1 pods for map[app:agnhost]
May 17 00:35:21.075: INFO: Found 0 / 1
May 17 00:35:22.040: INFO: Selector matched 1 pods for map[app:agnhost]
May 17 00:35:22.040: INFO: Found 1 / 1
May 17 00:35:22.040: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 17 00:35:22.044: INFO: Selector matched 1 pods for map[app:agnhost]
May 17 00:35:22.044: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 17 00:35:22.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-2gq5w --namespace=kubectl-2866' May 17 00:35:22.161: INFO: stderr: "" May 17 00:35:22.161: INFO: stdout: "Name: agnhost-master-2gq5w\nNamespace: kubectl-2866\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Sun, 17 May 2020 00:35:17 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.173\nIPs:\n IP: 10.244.1.173\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://7ebfa97df71a8fdb1ff783f192f24b1716d78aba22640e264b74655ab73d58be\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 17 May 2020 00:35:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tsg7v (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tsg7v:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tsg7v\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2866/agnhost-master-2gq5w to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container 
agnhost-master\n" May 17 00:35:22.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2866' May 17 00:35:22.307: INFO: stderr: "" May 17 00:35:22.307: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2866\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-2gq5w\n" May 17 00:35:22.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2866' May 17 00:35:22.459: INFO: stderr: "" May 17 00:35:22.459: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2866\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.104.212.78\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.173:6379\nSession Affinity: None\nEvents: \n" May 17 00:35:22.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane' May 17 00:35:22.608: INFO: stderr: "" May 17 00:35:22.608: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n 
node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sun, 17 May 2020 00:35:14 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 17 May 2020 00:32:12 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 17 May 2020 00:32:12 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 17 May 2020 00:32:12 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 17 May 2020 00:32:12 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 17d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 17d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 17d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 17 00:35:22.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-2866' May 17 00:35:22.734: INFO: stderr: "" May 17 00:35:22.734: INFO: stdout: "Name: kubectl-2866\nLabels: e2e-framework=kubectl\n e2e-run=08629696-2499-4706-9fe8-af1fe331cacd\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:35:22.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2866" for this suite. 
• [SLOW TEST:5.419 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":163,"skipped":2849,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:35:22.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 17 00:35:22.852: INFO: Waiting up to 5m0s for pod "downward-api-4e79cf5d-00b8-4e2f-ab67-e2e1902cfcca" in namespace "downward-api-8334" to be "Succeeded or Failed" May 17 00:35:22.872: INFO: Pod "downward-api-4e79cf5d-00b8-4e2f-ab67-e2e1902cfcca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.504892ms May 17 00:35:24.919: INFO: Pod "downward-api-4e79cf5d-00b8-4e2f-ab67-e2e1902cfcca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066853938s May 17 00:35:26.924: INFO: Pod "downward-api-4e79cf5d-00b8-4e2f-ab67-e2e1902cfcca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071794702s STEP: Saw pod success May 17 00:35:26.924: INFO: Pod "downward-api-4e79cf5d-00b8-4e2f-ab67-e2e1902cfcca" satisfied condition "Succeeded or Failed" May 17 00:35:26.927: INFO: Trying to get logs from node latest-worker2 pod downward-api-4e79cf5d-00b8-4e2f-ab67-e2e1902cfcca container dapi-container: STEP: delete the pod May 17 00:35:26.971: INFO: Waiting for pod downward-api-4e79cf5d-00b8-4e2f-ab67-e2e1902cfcca to disappear May 17 00:35:26.980: INFO: Pod downward-api-4e79cf5d-00b8-4e2f-ab67-e2e1902cfcca no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:35:26.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8334" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2867,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:35:26.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 17 00:35:27.100: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-a 4f19eeb3-f725-493f-9d8f-137b489b01f0 5291116 0 2020-05-17 00:35:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-17 00:35:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 17 00:35:27.101: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-a 4f19eeb3-f725-493f-9d8f-137b489b01f0 5291116 0 
2020-05-17 00:35:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-17 00:35:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 17 00:35:37.109: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-a 4f19eeb3-f725-493f-9d8f-137b489b01f0 5291174 0 2020-05-17 00:35:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-17 00:35:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 17 00:35:37.110: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-a 4f19eeb3-f725-493f-9d8f-137b489b01f0 5291174 0 2020-05-17 00:35:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-17 00:35:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 17 00:35:47.118: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-a 4f19eeb3-f725-493f-9d8f-137b489b01f0 5291204 0 2020-05-17 00:35:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-17 00:35:47 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 17 00:35:47.118: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-a 4f19eeb3-f725-493f-9d8f-137b489b01f0 5291204 0 2020-05-17 00:35:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-17 00:35:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 17 00:35:57.128: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-a 4f19eeb3-f725-493f-9d8f-137b489b01f0 5291236 0 2020-05-17 00:35:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-17 00:35:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 17 00:35:57.129: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-a 4f19eeb3-f725-493f-9d8f-137b489b01f0 5291236 0 2020-05-17 00:35:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-17 00:35:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers 
observe the notification May 17 00:36:07.137: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-b 63f01bb4-44c1-4de9-9419-45b4934e8d2f 5291264 0 2020-05-17 00:36:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-17 00:36:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 17 00:36:07.137: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-b 63f01bb4-44c1-4de9-9419-45b4934e8d2f 5291264 0 2020-05-17 00:36:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-17 00:36:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 17 00:36:17.144: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-b 63f01bb4-44c1-4de9-9419-45b4934e8d2f 5291294 0 2020-05-17 00:36:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-17 00:36:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 17 00:36:17.144: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2785 /api/v1/namespaces/watch-2785/configmaps/e2e-watch-test-configmap-b 63f01bb4-44c1-4de9-9419-45b4934e8d2f 5291294 0 2020-05-17 00:36:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-17 00:36:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:36:27.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2785" for this suite. • [SLOW TEST:60.169 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":165,"skipped":2874,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:36:27.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod 
busybox-714f5e38-9816-4bb7-935b-23f2c88645e4 in namespace container-probe-891 May 17 00:36:31.246: INFO: Started pod busybox-714f5e38-9816-4bb7-935b-23f2c88645e4 in namespace container-probe-891 STEP: checking the pod's current state and verifying that restartCount is present May 17 00:36:31.249: INFO: Initial restart count of pod busybox-714f5e38-9816-4bb7-935b-23f2c88645e4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:40:31.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-891" for this suite. • [SLOW TEST:244.699 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":166,"skipped":2894,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:40:31.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: 
Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:40:31.900: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:40:32.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7592" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":167,"skipped":2906,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:40:32.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 17 00:40:33.375: INFO: Pod name wrapped-volume-race-728bd659-f730-49d6-ac0c-983139837dd5: Found 0 pods out of 5 May 17 00:40:38.399: 
INFO: Pod name wrapped-volume-race-728bd659-f730-49d6-ac0c-983139837dd5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-728bd659-f730-49d6-ac0c-983139837dd5 in namespace emptydir-wrapper-8377, will wait for the garbage collector to delete the pods May 17 00:40:52.494: INFO: Deleting ReplicationController wrapped-volume-race-728bd659-f730-49d6-ac0c-983139837dd5 took: 10.476065ms May 17 00:40:52.894: INFO: Terminating ReplicationController wrapped-volume-race-728bd659-f730-49d6-ac0c-983139837dd5 pods took: 400.336714ms STEP: Creating RC which spawns configmap-volume pods May 17 00:41:05.028: INFO: Pod name wrapped-volume-race-a014360d-b7ff-4ad8-8683-6ad94dd944a6: Found 0 pods out of 5 May 17 00:41:10.036: INFO: Pod name wrapped-volume-race-a014360d-b7ff-4ad8-8683-6ad94dd944a6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a014360d-b7ff-4ad8-8683-6ad94dd944a6 in namespace emptydir-wrapper-8377, will wait for the garbage collector to delete the pods May 17 00:41:22.554: INFO: Deleting ReplicationController wrapped-volume-race-a014360d-b7ff-4ad8-8683-6ad94dd944a6 took: 15.190833ms May 17 00:41:22.955: INFO: Terminating ReplicationController wrapped-volume-race-a014360d-b7ff-4ad8-8683-6ad94dd944a6 pods took: 400.243258ms STEP: Creating RC which spawns configmap-volume pods May 17 00:41:35.611: INFO: Pod name wrapped-volume-race-16e81683-4657-4d0a-92d9-b862772260da: Found 0 pods out of 5 May 17 00:41:40.619: INFO: Pod name wrapped-volume-race-16e81683-4657-4d0a-92d9-b862772260da: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-16e81683-4657-4d0a-92d9-b862772260da in namespace emptydir-wrapper-8377, will wait for the garbage collector to delete the pods May 17 00:41:54.817: INFO: Deleting ReplicationController wrapped-volume-race-16e81683-4657-4d0a-92d9-b862772260da took: 
7.083225ms May 17 00:41:55.218: INFO: Terminating ReplicationController wrapped-volume-race-16e81683-4657-4d0a-92d9-b862772260da pods took: 400.259495ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:42:05.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8377" for this suite. • [SLOW TEST:93.430 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":168,"skipped":2911,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:42:05.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 17 00:42:06.093: INFO: Waiting up to 5m0s for pod "pod-87c65422-e1a1-412e-b26b-29c31bff5d62" in 
namespace "emptydir-1620" to be "Succeeded or Failed" May 17 00:42:06.110: INFO: Pod "pod-87c65422-e1a1-412e-b26b-29c31bff5d62": Phase="Pending", Reason="", readiness=false. Elapsed: 16.258604ms May 17 00:42:08.114: INFO: Pod "pod-87c65422-e1a1-412e-b26b-29c31bff5d62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020103371s May 17 00:42:10.118: INFO: Pod "pod-87c65422-e1a1-412e-b26b-29c31bff5d62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024972899s May 17 00:42:12.124: INFO: Pod "pod-87c65422-e1a1-412e-b26b-29c31bff5d62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030134136s STEP: Saw pod success May 17 00:42:12.124: INFO: Pod "pod-87c65422-e1a1-412e-b26b-29c31bff5d62" satisfied condition "Succeeded or Failed" May 17 00:42:12.130: INFO: Trying to get logs from node latest-worker pod pod-87c65422-e1a1-412e-b26b-29c31bff5d62 container test-container: STEP: delete the pod May 17 00:42:12.196: INFO: Waiting for pod pod-87c65422-e1a1-412e-b26b-29c31bff5d62 to disappear May 17 00:42:12.201: INFO: Pod pod-87c65422-e1a1-412e-b26b-29c31bff5d62 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:42:12.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1620" for this suite. 
• [SLOW TEST:6.287 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":2912,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:42:12.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-qsxnr in namespace proxy-2666 I0517 00:42:12.435052 7 runners.go:190] Created replication controller with name: proxy-service-qsxnr, namespace: proxy-2666, replica count: 1 I0517 00:42:13.485539 7 runners.go:190] proxy-service-qsxnr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:42:14.485808 7 runners.go:190] proxy-service-qsxnr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:42:15.486133 7 runners.go:190] proxy-service-qsxnr Pods: 1 out of 1 created, 0 running, 1 pending, 
0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:42:16.486387 7 runners.go:190] proxy-service-qsxnr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0517 00:42:17.486593 7 runners.go:190] proxy-service-qsxnr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0517 00:42:18.486834 7 runners.go:190] proxy-service-qsxnr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 00:42:18.490: INFO: setup took 6.134758604s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 17 00:42:18.500: INFO: (0) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 9.166471ms) May 17 00:42:18.500: INFO: (0) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 9.429301ms) May 17 00:42:18.500: INFO: (0) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 9.102367ms) May 17 00:42:18.502: INFO: (0) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... (200; 10.929495ms) May 17 00:42:18.502: INFO: (0) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 11.088394ms) May 17 00:42:18.502: INFO: (0) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 11.020534ms) May 17 00:42:18.503: INFO: (0) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... 
(200; 11.982818ms) May 17 00:42:18.503: INFO: (0) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 12.624095ms) May 17 00:42:18.505: INFO: (0) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 14.84846ms) May 17 00:42:18.505: INFO: (0) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 14.830183ms) May 17 00:42:18.505: INFO: (0) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 14.716388ms) May 17 00:42:18.508: INFO: (0) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 17.055668ms) May 17 00:42:18.508: INFO: (0) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 17.303912ms) May 17 00:42:18.508: INFO: (0) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 17.306207ms) May 17 00:42:18.508: INFO: (0) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 17.354505ms) May 17 00:42:18.508: INFO: (0) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: ... (200; 4.064247ms) May 17 00:42:18.512: INFO: (1) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 4.168662ms) May 17 00:42:18.512: INFO: (1) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 4.150716ms) May 17 00:42:18.513: INFO: (1) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 4.329818ms) May 17 00:42:18.513: INFO: (1) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.341728ms) May 17 00:42:18.513: INFO: (1) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... 
(200; 4.383394ms) May 17 00:42:18.513: INFO: (1) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.037679ms) May 17 00:42:18.514: INFO: (1) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test (200; 6.013258ms) May 17 00:42:18.515: INFO: (1) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.866876ms) May 17 00:42:18.522: INFO: (2) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 6.982887ms) May 17 00:42:18.522: INFO: (2) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 7.23592ms) May 17 00:42:18.522: INFO: (2) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 7.56268ms) May 17 00:42:18.523: INFO: (2) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 7.818661ms) May 17 00:42:18.523: INFO: (2) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 7.893236ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 9.124276ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 9.244209ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 9.170822ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... (200; 9.240862ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 9.337091ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... 
(200; 9.332532ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 9.516114ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 9.58293ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 9.66728ms) May 17 00:42:18.524: INFO: (2) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test<... (200; 4.574237ms) May 17 00:42:18.530: INFO: (3) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.548471ms) May 17 00:42:18.530: INFO: (3) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 4.898115ms) May 17 00:42:18.530: INFO: (3) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 5.488565ms) May 17 00:42:18.530: INFO: (3) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... 
(200; 5.387727ms) May 17 00:42:18.530: INFO: (3) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.426242ms) May 17 00:42:18.530: INFO: (3) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 5.422888ms) May 17 00:42:18.531: INFO: (3) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 5.744341ms) May 17 00:42:18.531: INFO: (3) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 5.69955ms) May 17 00:42:18.531: INFO: (3) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 5.735194ms) May 17 00:42:18.531: INFO: (3) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 5.791152ms) May 17 00:42:18.531: INFO: (3) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 6.004022ms) May 17 00:42:18.535: INFO: (4) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 4.358357ms) May 17 00:42:18.536: INFO: (4) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 4.72289ms) May 17 00:42:18.536: INFO: (4) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 4.752351ms) May 17 00:42:18.536: INFO: (4) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.86305ms) May 17 00:42:18.536: INFO: (4) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 4.851024ms) May 17 00:42:18.536: INFO: (4) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... 
(200; 4.961964ms) May 17 00:42:18.536: INFO: (4) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.179352ms) May 17 00:42:18.536: INFO: (4) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.293084ms) May 17 00:42:18.537: INFO: (4) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 5.519754ms) May 17 00:42:18.537: INFO: (4) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 5.53275ms) May 17 00:42:18.537: INFO: (4) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.740964ms) May 17 00:42:18.537: INFO: (4) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 5.747066ms) May 17 00:42:18.537: INFO: (4) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test<... (200; 6.025733ms) May 17 00:42:18.537: INFO: (4) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 5.99834ms) May 17 00:42:18.542: INFO: (5) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 4.872699ms) May 17 00:42:18.542: INFO: (5) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 4.944822ms) May 17 00:42:18.543: INFO: (5) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 5.294143ms) May 17 00:42:18.543: INFO: (5) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 5.249281ms) May 17 00:42:18.543: INFO: (5) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.275123ms) May 17 00:42:18.543: INFO: (5) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 5.274546ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 
6.71031ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 6.848251ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 6.778859ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 6.751696ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... (200; 6.935046ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 6.944676ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... (200; 7.021067ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 7.094463ms) May 17 00:42:18.544: INFO: (5) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: ... (200; 2.849438ms) May 17 00:42:18.547: INFO: (6) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 2.961299ms) May 17 00:42:18.547: INFO: (6) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... 
(200; 2.898816ms) May 17 00:42:18.550: INFO: (6) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.538979ms) May 17 00:42:18.550: INFO: (6) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 5.737866ms) May 17 00:42:18.550: INFO: (6) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.583452ms) May 17 00:42:18.550: INFO: (6) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 5.975975ms) May 17 00:42:18.550: INFO: (6) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 5.78258ms) May 17 00:42:18.551: INFO: (6) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 5.796581ms) May 17 00:42:18.551: INFO: (6) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test<... (200; 6.194925ms) May 17 00:42:18.558: INFO: (7) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 6.912915ms) May 17 00:42:18.558: INFO: (7) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 6.893426ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... 
(200; 7.11239ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 7.080759ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 7.330225ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 7.227939ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test (200; 7.550522ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 7.455101ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 7.490544ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 7.52074ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 7.760177ms) May 17 00:42:18.559: INFO: (7) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 7.788977ms) May 17 00:42:18.567: INFO: (8) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... (200; 7.515191ms) May 17 00:42:18.567: INFO: (8) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 7.80146ms) May 17 00:42:18.567: INFO: (8) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 7.689609ms) May 17 00:42:18.567: INFO: (8) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... 
(200; 7.896244ms) May 17 00:42:18.567: INFO: (8) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test (200; 7.890164ms) May 17 00:42:18.567: INFO: (8) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 7.901567ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 8.039909ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 8.051093ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 8.173815ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 8.140519ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 8.172753ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 8.219526ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 8.173988ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 8.487585ms) May 17 00:42:18.568: INFO: (8) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 8.430607ms) May 17 00:42:18.571: INFO: (9) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... 
(200; 2.613454ms) May 17 00:42:18.571: INFO: (9) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 2.551125ms) May 17 00:42:18.571: INFO: (9) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 2.644425ms) May 17 00:42:18.573: INFO: (9) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.374128ms) May 17 00:42:18.573: INFO: (9) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 4.795597ms) May 17 00:42:18.573: INFO: (9) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 4.879817ms) May 17 00:42:18.573: INFO: (9) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 5.085215ms) May 17 00:42:18.573: INFO: (9) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: ... (200; 5.104603ms) May 17 00:42:18.573: INFO: (9) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 5.123025ms) May 17 00:42:18.574: INFO: (9) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 5.434843ms) May 17 00:42:18.574: INFO: (9) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 5.894339ms) May 17 00:42:18.574: INFO: (9) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 6.015439ms) May 17 00:42:18.574: INFO: (9) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 6.166964ms) May 17 00:42:18.574: INFO: (9) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 6.213323ms) May 17 00:42:18.574: INFO: (9) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 6.388085ms) May 17 00:42:18.579: INFO: (10) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 
4.860983ms) May 17 00:42:18.579: INFO: (10) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... (200; 4.856714ms) May 17 00:42:18.579: INFO: (10) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 4.83703ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 4.86129ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.901214ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.936847ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.098833ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 5.599937ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 5.564436ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 5.701739ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... (200; 5.729473ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 5.773799ms) May 17 00:42:18.580: INFO: (10) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.645569ms) May 17 00:42:18.581: INFO: (10) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 6.023457ms) May 17 00:42:18.581: INFO: (10) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 5.919913ms) May 17 00:42:18.581: INFO: (10) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: ... 
(200; 3.548701ms) May 17 00:42:18.584: INFO: (11) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test (200; 3.707061ms) May 17 00:42:18.585: INFO: (11) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... (200; 3.733937ms) May 17 00:42:18.586: INFO: (11) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 5.212834ms) May 17 00:42:18.586: INFO: (11) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.06942ms) May 17 00:42:18.586: INFO: (11) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 5.175898ms) May 17 00:42:18.586: INFO: (11) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 5.252989ms) May 17 00:42:18.586: INFO: (11) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 5.376681ms) May 17 00:42:18.586: INFO: (11) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.52393ms) May 17 00:42:18.587: INFO: (11) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 5.817462ms) May 17 00:42:18.587: INFO: (11) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 5.891143ms) May 17 00:42:18.587: INFO: (11) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.822024ms) May 17 00:42:18.587: INFO: (11) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.838271ms) May 17 00:42:18.587: INFO: (11) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 5.822763ms) May 17 00:42:18.587: INFO: (11) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.889593ms) May 17 00:42:18.590: INFO: (12) 
/api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 3.170872ms) May 17 00:42:18.591: INFO: (12) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 3.987073ms) May 17 00:42:18.591: INFO: (12) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 3.967701ms) May 17 00:42:18.591: INFO: (12) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: ... (200; 4.42336ms) May 17 00:42:18.591: INFO: (12) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 4.339257ms) May 17 00:42:18.591: INFO: (12) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.436668ms) May 17 00:42:18.591: INFO: (12) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 4.481243ms) May 17 00:42:18.591: INFO: (12) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... 
(200; 4.462116ms) May 17 00:42:18.592: INFO: (12) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.782006ms) May 17 00:42:18.593: INFO: (12) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 6.338287ms) May 17 00:42:18.593: INFO: (12) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 6.351536ms) May 17 00:42:18.593: INFO: (12) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 6.292691ms) May 17 00:42:18.593: INFO: (12) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 6.537705ms) May 17 00:42:18.593: INFO: (12) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 6.6087ms) May 17 00:42:18.593: INFO: (12) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 6.626071ms) May 17 00:42:18.597: INFO: (13) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 3.267617ms) May 17 00:42:18.597: INFO: (13) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 3.792357ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 3.949375ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 3.93514ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 4.020045ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... 
(200; 4.058772ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 4.907417ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.909991ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 4.833642ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... (200; 4.862011ms) May 17 00:42:18.598: INFO: (13) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test<... (200; 3.109374ms) May 17 00:42:18.604: INFO: (14) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 4.404129ms) May 17 00:42:18.604: INFO: (14) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 4.533126ms) May 17 00:42:18.604: INFO: (14) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 4.913624ms) May 17 00:42:18.605: INFO: (14) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.178544ms) May 17 00:42:18.605: INFO: (14) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... 
(200; 5.457464ms) May 17 00:42:18.605: INFO: (14) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.313817ms) May 17 00:42:18.605: INFO: (14) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test (200; 5.752603ms) May 17 00:42:18.606: INFO: (14) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 6.345302ms) May 17 00:42:18.606: INFO: (14) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 6.374053ms) May 17 00:42:18.606: INFO: (14) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 6.71871ms) May 17 00:42:18.607: INFO: (14) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 7.168355ms) May 17 00:42:18.607: INFO: (14) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 7.214848ms) May 17 00:42:18.607: INFO: (14) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 7.44299ms) May 17 00:42:18.610: INFO: (15) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 2.946307ms) May 17 00:42:18.611: INFO: (15) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... 
(200; 4.194439ms) May 17 00:42:18.611: INFO: (15) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 4.246182ms) May 17 00:42:18.611: INFO: (15) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 4.391414ms) May 17 00:42:18.612: INFO: (15) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.574453ms) May 17 00:42:18.612: INFO: (15) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 4.632203ms) May 17 00:42:18.612: INFO: (15) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 4.695491ms) May 17 00:42:18.612: INFO: (15) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 4.782605ms) May 17 00:42:18.612: INFO: (15) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 4.795664ms) May 17 00:42:18.612: INFO: (15) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: ... 
(200; 4.911061ms) May 17 00:42:18.613: INFO: (15) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.85787ms) May 17 00:42:18.613: INFO: (15) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 5.927261ms) May 17 00:42:18.613: INFO: (15) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 6.099601ms) May 17 00:42:18.613: INFO: (15) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 5.995375ms) May 17 00:42:18.613: INFO: (15) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 6.238174ms) May 17 00:42:18.616: INFO: (16) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 2.422265ms) May 17 00:42:18.616: INFO: (16) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... (200; 2.82824ms) May 17 00:42:18.616: INFO: (16) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... 
(200; 2.811831ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.370676ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 5.43699ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.484742ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 5.472751ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test (200; 5.585796ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 5.622062ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 5.582415ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 5.653398ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.542807ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 5.833828ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 5.85337ms) May 17 00:42:18.619: INFO: (16) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.858483ms) May 17 00:42:18.621: INFO: (17) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 1.917644ms) May 17 00:42:18.624: INFO: (17) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test<... 
(200; 5.725032ms) May 17 00:42:18.625: INFO: (17) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 5.695599ms) May 17 00:42:18.625: INFO: (17) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 5.772436ms) May 17 00:42:18.625: INFO: (17) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.832277ms) May 17 00:42:18.625: INFO: (17) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 5.869983ms) May 17 00:42:18.625: INFO: (17) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 5.765052ms) May 17 00:42:18.625: INFO: (17) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 5.862082ms) May 17 00:42:18.625: INFO: (17) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... (200; 5.904702ms) May 17 00:42:18.625: INFO: (17) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 5.869543ms) May 17 00:42:18.626: INFO: (17) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 6.311829ms) May 17 00:42:18.629: INFO: (18) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... (200; 3.225237ms) May 17 00:42:18.629: INFO: (18) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 3.135136ms) May 17 00:42:18.629: INFO: (18) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 3.263125ms) May 17 00:42:18.629: INFO: (18) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 3.571263ms) May 17 00:42:18.629: INFO: (18) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: test<... 
(200; 3.893734ms) May 17 00:42:18.630: INFO: (18) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 4.006941ms) May 17 00:42:18.630: INFO: (18) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 4.032004ms) May 17 00:42:18.631: INFO: (18) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 4.988661ms) May 17 00:42:18.631: INFO: (18) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname1/proxy/: tls baz (200; 5.055008ms) May 17 00:42:18.631: INFO: (18) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 5.021299ms) May 17 00:42:18.631: INFO: (18) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname2/proxy/: bar (200; 5.078225ms) May 17 00:42:18.631: INFO: (18) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 5.058783ms) May 17 00:42:18.634: INFO: (19) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:462/proxy/: tls qux (200; 2.843948ms) May 17 00:42:18.634: INFO: (19) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 2.9138ms) May 17 00:42:18.634: INFO: (19) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:1080/proxy/: test<... 
(200; 2.954448ms) May 17 00:42:18.634: INFO: (19) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 3.561256ms) May 17 00:42:18.635: INFO: (19) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:160/proxy/: foo (200; 3.548445ms) May 17 00:42:18.635: INFO: (19) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk:162/proxy/: bar (200; 3.710979ms) May 17 00:42:18.635: INFO: (19) /api/v1/namespaces/proxy-2666/pods/proxy-service-qsxnr-jgjwk/proxy/: test (200; 3.896499ms) May 17 00:42:18.639: INFO: (19) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname1/proxy/: foo (200; 8.081416ms) May 17 00:42:18.641: INFO: (19) /api/v1/namespaces/proxy-2666/pods/http:proxy-service-qsxnr-jgjwk:1080/proxy/: ... (200; 10.131316ms) May 17 00:42:18.641: INFO: (19) /api/v1/namespaces/proxy-2666/services/http:proxy-service-qsxnr:portname1/proxy/: foo (200; 10.485812ms) May 17 00:42:18.641: INFO: (19) /api/v1/namespaces/proxy-2666/services/proxy-service-qsxnr:portname2/proxy/: bar (200; 10.502486ms) May 17 00:42:18.642: INFO: (19) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:460/proxy/: tls baz (200; 10.715189ms) May 17 00:42:18.642: INFO: (19) /api/v1/namespaces/proxy-2666/services/https:proxy-service-qsxnr:tlsportname2/proxy/: tls qux (200; 10.846001ms) May 17 00:42:18.642: INFO: (19) /api/v1/namespaces/proxy-2666/pods/https:proxy-service-qsxnr-jgjwk:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 17 00:42:21.396: INFO: Waiting up to 5m0s for pod "var-expansion-0c732fdf-99cc-490a-adc6-7a6fbf374d76" in 
namespace "var-expansion-7286" to be "Succeeded or Failed" May 17 00:42:21.463: INFO: Pod "var-expansion-0c732fdf-99cc-490a-adc6-7a6fbf374d76": Phase="Pending", Reason="", readiness=false. Elapsed: 67.350452ms May 17 00:42:23.467: INFO: Pod "var-expansion-0c732fdf-99cc-490a-adc6-7a6fbf374d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071378276s May 17 00:42:25.471: INFO: Pod "var-expansion-0c732fdf-99cc-490a-adc6-7a6fbf374d76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075162113s STEP: Saw pod success May 17 00:42:25.471: INFO: Pod "var-expansion-0c732fdf-99cc-490a-adc6-7a6fbf374d76" satisfied condition "Succeeded or Failed" May 17 00:42:25.473: INFO: Trying to get logs from node latest-worker pod var-expansion-0c732fdf-99cc-490a-adc6-7a6fbf374d76 container dapi-container: STEP: delete the pod May 17 00:42:25.517: INFO: Waiting for pod var-expansion-0c732fdf-99cc-490a-adc6-7a6fbf374d76 to disappear May 17 00:42:25.525: INFO: Pod var-expansion-0c732fdf-99cc-490a-adc6-7a6fbf374d76 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:42:25.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7286" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2947,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:42:25.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-ce29f97a-c463-42e1-8314-9032b03c4085 STEP: Creating a pod to test consume configMaps May 17 00:42:25.683: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbbb5fbf-f9d9-4ccd-8ef9-79af25e26a4d" in namespace "configmap-9580" to be "Succeeded or Failed" May 17 00:42:25.700: INFO: Pod "pod-configmaps-bbbb5fbf-f9d9-4ccd-8ef9-79af25e26a4d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.384227ms May 17 00:42:27.790: INFO: Pod "pod-configmaps-bbbb5fbf-f9d9-4ccd-8ef9-79af25e26a4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106207622s May 17 00:42:29.793: INFO: Pod "pod-configmaps-bbbb5fbf-f9d9-4ccd-8ef9-79af25e26a4d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.109578915s STEP: Saw pod success May 17 00:42:29.793: INFO: Pod "pod-configmaps-bbbb5fbf-f9d9-4ccd-8ef9-79af25e26a4d" satisfied condition "Succeeded or Failed" May 17 00:42:29.795: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-bbbb5fbf-f9d9-4ccd-8ef9-79af25e26a4d container configmap-volume-test: STEP: delete the pod May 17 00:42:29.883: INFO: Waiting for pod pod-configmaps-bbbb5fbf-f9d9-4ccd-8ef9-79af25e26a4d to disappear May 17 00:42:29.889: INFO: Pod pod-configmaps-bbbb5fbf-f9d9-4ccd-8ef9-79af25e26a4d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:42:29.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9580" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2960,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:42:29.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-cf37e993-dac3-4f81-826e-8d5f7492db04 STEP: Creating configMap with name cm-test-opt-upd-7b724944-9bf0-4bba-a4b4-a47f396ef68e STEP: 
Creating the pod STEP: Deleting configmap cm-test-opt-del-cf37e993-dac3-4f81-826e-8d5f7492db04 STEP: Updating configmap cm-test-opt-upd-7b724944-9bf0-4bba-a4b4-a47f396ef68e STEP: Creating configMap with name cm-test-opt-create-b3a8b2b6-34d4-4901-8ec5-c7e31bf3cccb STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:42:40.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7528" for this suite. • [SLOW TEST:10.555 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":2964,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:42:40.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with 
name secret-test-map-6c4be704-252c-4a0a-a3c0-60b2f542e038 STEP: Creating a pod to test consume secrets May 17 00:42:40.611: INFO: Waiting up to 5m0s for pod "pod-secrets-29df5a07-7565-4e8b-9e9f-6554f00885c0" in namespace "secrets-5638" to be "Succeeded or Failed" May 17 00:42:40.616: INFO: Pod "pod-secrets-29df5a07-7565-4e8b-9e9f-6554f00885c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.868154ms May 17 00:42:42.619: INFO: Pod "pod-secrets-29df5a07-7565-4e8b-9e9f-6554f00885c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008065767s May 17 00:42:44.623: INFO: Pod "pod-secrets-29df5a07-7565-4e8b-9e9f-6554f00885c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01222932s STEP: Saw pod success May 17 00:42:44.623: INFO: Pod "pod-secrets-29df5a07-7565-4e8b-9e9f-6554f00885c0" satisfied condition "Succeeded or Failed" May 17 00:42:44.626: INFO: Trying to get logs from node latest-worker pod pod-secrets-29df5a07-7565-4e8b-9e9f-6554f00885c0 container secret-volume-test: STEP: delete the pod May 17 00:42:44.660: INFO: Waiting for pod pod-secrets-29df5a07-7565-4e8b-9e9f-6554f00885c0 to disappear May 17 00:42:44.670: INFO: Pod pod-secrets-29df5a07-7565-4e8b-9e9f-6554f00885c0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:42:44.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5638" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2970,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:42:44.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2623 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 17 00:42:44.887: INFO: Found 0 stateful pods, waiting for 3 May 17 00:42:54.990: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 00:42:54.990: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 00:42:54.990: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 17 00:43:04.891: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true May 17 00:43:04.891: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 00:43:04.891: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 17 00:43:04.914: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 17 00:43:15.074: INFO: Updating stateful set ss2 May 17 00:43:15.143: INFO: Waiting for Pod statefulset-2623/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 17 00:43:25.751: INFO: Found 2 stateful pods, waiting for 3 May 17 00:43:35.757: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 00:43:35.757: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 00:43:35.757: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 17 00:43:35.784: INFO: Updating stateful set ss2 May 17 00:43:35.816: INFO: Waiting for Pod statefulset-2623/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 17 00:43:45.844: INFO: Updating stateful set ss2 May 17 00:43:45.853: INFO: Waiting for StatefulSet statefulset-2623/ss2 to complete update May 17 00:43:45.853: INFO: Waiting for Pod statefulset-2623/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 17 00:43:55.860: INFO: Deleting all statefulset in ns statefulset-2623 May 17 00:43:55.864: INFO: Scaling 
statefulset ss2 to 0 May 17 00:44:25.894: INFO: Waiting for statefulset status.replicas updated to 0 May 17 00:44:25.896: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:44:25.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2623" for this suite. • [SLOW TEST:101.186 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":175,"skipped":3005,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:44:25.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 17 00:44:26.185: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:26.224: INFO: Number of nodes with available pods: 0 May 17 00:44:26.224: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:27.228: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:27.232: INFO: Number of nodes with available pods: 0 May 17 00:44:27.232: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:28.345: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:28.559: INFO: Number of nodes with available pods: 0 May 17 00:44:28.559: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:29.229: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:29.233: INFO: Number of nodes with available pods: 0 May 17 00:44:29.233: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:30.230: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:30.234: INFO: Number of nodes with available pods: 0 May 17 00:44:30.234: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:31.244: INFO: DaemonSet pods can't tolerate node 
latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:31.278: INFO: Number of nodes with available pods: 2 May 17 00:44:31.278: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 17 00:44:31.316: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:31.333: INFO: Number of nodes with available pods: 1 May 17 00:44:31.333: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:32.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:32.344: INFO: Number of nodes with available pods: 1 May 17 00:44:32.344: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:33.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:33.344: INFO: Number of nodes with available pods: 1 May 17 00:44:33.344: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:34.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:34.344: INFO: Number of nodes with available pods: 1 May 17 00:44:34.344: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:35.338: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:35.342: INFO: Number of nodes with available pods: 1 May 17 00:44:35.342: 
INFO: Node latest-worker is running more than one daemon pod May 17 00:44:36.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:36.343: INFO: Number of nodes with available pods: 1 May 17 00:44:36.344: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:37.370: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:37.381: INFO: Number of nodes with available pods: 1 May 17 00:44:37.381: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:38.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:38.343: INFO: Number of nodes with available pods: 1 May 17 00:44:38.344: INFO: Node latest-worker is running more than one daemon pod May 17 00:44:39.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:44:39.342: INFO: Number of nodes with available pods: 2 May 17 00:44:39.342: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6435, will wait for the garbage collector to delete the pods May 17 00:44:39.409: INFO: Deleting DaemonSet.extensions daemon-set took: 10.898722ms May 17 00:44:39.509: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.234562ms May 17 00:44:45.337: INFO: Number of nodes with available pods: 0 May 17 
00:44:45.337: INFO: Number of running nodes: 0, number of available pods: 0 May 17 00:44:45.340: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6435/daemonsets","resourceVersion":"5294203"},"items":null} May 17 00:44:45.353: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6435/pods","resourceVersion":"5294204"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:44:45.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6435" for this suite. • [SLOW TEST:19.451 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":176,"skipped":3027,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:44:45.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the 
configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:44:49.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6843" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":177,"skipped":3029,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:44:49.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 17 00:44:49.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8323' May 17 00:44:53.383: INFO: stderr: "" May 17 00:44:53.383: INFO: stdout: "pod/pause created\n" May 17 00:44:53.383: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 17 00:44:53.383: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8323" to be "running and ready" May 17 00:44:53.422: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.386605ms May 17 00:44:55.426: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042923346s May 17 00:44:57.430: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.047142451s May 17 00:44:57.430: INFO: Pod "pause" satisfied condition "running and ready" May 17 00:44:57.430: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 17 00:44:57.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8323' May 17 00:44:57.536: INFO: stderr: "" May 17 00:44:57.536: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 17 00:44:57.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8323' May 17 00:44:57.629: INFO: stderr: "" May 17 00:44:57.629: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 17 00:44:57.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8323' May 17 00:44:57.743: INFO: stderr: "" May 17 00:44:57.743: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 17 00:44:57.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8323' May 17 00:44:57.867: INFO: stderr: "" May 17 
00:44:57.867: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 17 00:44:57.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8323' May 17 00:44:58.006: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 17 00:44:58.006: INFO: stdout: "pod \"pause\" force deleted\n" May 17 00:44:58.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8323' May 17 00:44:58.314: INFO: stderr: "No resources found in kubectl-8323 namespace.\n" May 17 00:44:58.314: INFO: stdout: "" May 17 00:44:58.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8323 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 17 00:44:58.411: INFO: stderr: "" May 17 00:44:58.411: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:44:58.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8323" for this suite. 
• [SLOW TEST:8.737 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":178,"skipped":3041,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:44:58.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:44:58.485: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a422b2bf-3875-4234-b000-4bf680b36d05" in namespace "security-context-test-6350" to be "Succeeded or Failed" May 17 00:44:58.491: INFO: Pod "busybox-user-65534-a422b2bf-3875-4234-b000-4bf680b36d05": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.592992ms May 17 00:45:00.496: INFO: Pod "busybox-user-65534-a422b2bf-3875-4234-b000-4bf680b36d05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010789479s May 17 00:45:02.559: INFO: Pod "busybox-user-65534-a422b2bf-3875-4234-b000-4bf680b36d05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073416036s May 17 00:45:02.559: INFO: Pod "busybox-user-65534-a422b2bf-3875-4234-b000-4bf680b36d05" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:45:02.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6350" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":179,"skipped":3044,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:45:02.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the 
container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 17 00:45:06.737: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:45:06.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8678" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":180,"skipped":3045,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:45:06.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:45:07.110: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Pending, waiting 
for it to be Running (with Ready = true) May 17 00:45:09.116: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Pending, waiting for it to be Running (with Ready = true) May 17 00:45:11.114: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:13.114: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:15.116: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:17.114: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:19.113: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:21.115: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:23.115: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:25.114: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:27.114: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:29.115: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = false) May 17 00:45:31.114: INFO: The status of Pod test-webserver-dc60875c-c2f5-4e79-92eb-483e8a4074cd is Running (Ready = true) May 17 00:45:31.117: INFO: Container started at 2020-05-17 00:45:09 +0000 UTC, pod became ready at 2020-05-17 00:45:29 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:45:31.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-106" for this suite. 
• [SLOW TEST:24.212 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":181,"skipped":3057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:45:31.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8719 STEP: creating a selector STEP: Creating the service pods in kubernetes May 17 00:45:31.182: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 17 00:45:31.260: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 17 00:45:33.523: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 17 00:45:35.269: INFO: The status of Pod netserver-0 
is Running (Ready = false) May 17 00:45:37.326: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:45:39.263: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:45:41.265: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:45:43.265: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:45:45.264: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:45:47.264: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:45:49.264: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:45:51.264: INFO: The status of Pod netserver-0 is Running (Ready = true) May 17 00:45:51.269: INFO: The status of Pod netserver-1 is Running (Ready = false) May 17 00:45:53.283: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 17 00:45:57.376: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.194 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8719 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:45:57.376: INFO: >>> kubeConfig: /root/.kube/config I0517 00:45:57.402965 7 log.go:172] (0xc0017f66e0) (0xc0012a8c80) Create stream I0517 00:45:57.402992 7 log.go:172] (0xc0017f66e0) (0xc0012a8c80) Stream added, broadcasting: 1 I0517 00:45:57.404752 7 log.go:172] (0xc0017f66e0) Reply frame received for 1 I0517 00:45:57.404798 7 log.go:172] (0xc0017f66e0) (0xc001be0780) Create stream I0517 00:45:57.404814 7 log.go:172] (0xc0017f66e0) (0xc001be0780) Stream added, broadcasting: 3 I0517 00:45:57.406106 7 log.go:172] (0xc0017f66e0) Reply frame received for 3 I0517 00:45:57.406150 7 log.go:172] (0xc0017f66e0) (0xc001be0820) Create stream I0517 00:45:57.406168 7 log.go:172] (0xc0017f66e0) (0xc001be0820) Stream added, broadcasting: 5 I0517 00:45:57.407196 7 log.go:172] (0xc0017f66e0) Reply frame received for 5 
I0517 00:45:58.520569 7 log.go:172] (0xc0017f66e0) Data frame received for 3 I0517 00:45:58.520622 7 log.go:172] (0xc001be0780) (3) Data frame handling I0517 00:45:58.520656 7 log.go:172] (0xc001be0780) (3) Data frame sent I0517 00:45:58.520695 7 log.go:172] (0xc0017f66e0) Data frame received for 3 I0517 00:45:58.520740 7 log.go:172] (0xc001be0780) (3) Data frame handling I0517 00:45:58.520784 7 log.go:172] (0xc0017f66e0) Data frame received for 5 I0517 00:45:58.520808 7 log.go:172] (0xc001be0820) (5) Data frame handling I0517 00:45:58.523555 7 log.go:172] (0xc0017f66e0) Data frame received for 1 I0517 00:45:58.523585 7 log.go:172] (0xc0012a8c80) (1) Data frame handling I0517 00:45:58.523607 7 log.go:172] (0xc0012a8c80) (1) Data frame sent I0517 00:45:58.523641 7 log.go:172] (0xc0017f66e0) (0xc0012a8c80) Stream removed, broadcasting: 1 I0517 00:45:58.523689 7 log.go:172] (0xc0017f66e0) Go away received I0517 00:45:58.523799 7 log.go:172] (0xc0017f66e0) (0xc0012a8c80) Stream removed, broadcasting: 1 I0517 00:45:58.523834 7 log.go:172] (0xc0017f66e0) (0xc001be0780) Stream removed, broadcasting: 3 I0517 00:45:58.523850 7 log.go:172] (0xc0017f66e0) (0xc001be0820) Stream removed, broadcasting: 5 May 17 00:45:58.523: INFO: Found all expected endpoints: [netserver-0] May 17 00:45:58.527: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.237 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8719 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:45:58.527: INFO: >>> kubeConfig: /root/.kube/config I0517 00:45:58.563317 7 log.go:172] (0xc002cac420) (0xc0020dc280) Create stream I0517 00:45:58.563347 7 log.go:172] (0xc002cac420) (0xc0020dc280) Stream added, broadcasting: 1 I0517 00:45:58.565515 7 log.go:172] (0xc002cac420) Reply frame received for 1 I0517 00:45:58.565568 7 log.go:172] (0xc002cac420) (0xc0020dc500) Create stream I0517 00:45:58.565586 7 log.go:172] 
(0xc002cac420) (0xc0020dc500) Stream added, broadcasting: 3 I0517 00:45:58.566644 7 log.go:172] (0xc002cac420) Reply frame received for 3 I0517 00:45:58.566690 7 log.go:172] (0xc002cac420) (0xc0017f23c0) Create stream I0517 00:45:58.566708 7 log.go:172] (0xc002cac420) (0xc0017f23c0) Stream added, broadcasting: 5 I0517 00:45:58.567553 7 log.go:172] (0xc002cac420) Reply frame received for 5 I0517 00:45:59.655743 7 log.go:172] (0xc002cac420) Data frame received for 3 I0517 00:45:59.655775 7 log.go:172] (0xc0020dc500) (3) Data frame handling I0517 00:45:59.655792 7 log.go:172] (0xc0020dc500) (3) Data frame sent I0517 00:45:59.655997 7 log.go:172] (0xc002cac420) Data frame received for 3 I0517 00:45:59.656031 7 log.go:172] (0xc0020dc500) (3) Data frame handling I0517 00:45:59.656054 7 log.go:172] (0xc002cac420) Data frame received for 5 I0517 00:45:59.656066 7 log.go:172] (0xc0017f23c0) (5) Data frame handling I0517 00:45:59.657766 7 log.go:172] (0xc002cac420) Data frame received for 1 I0517 00:45:59.657782 7 log.go:172] (0xc0020dc280) (1) Data frame handling I0517 00:45:59.657798 7 log.go:172] (0xc0020dc280) (1) Data frame sent I0517 00:45:59.657919 7 log.go:172] (0xc002cac420) (0xc0020dc280) Stream removed, broadcasting: 1 I0517 00:45:59.657954 7 log.go:172] (0xc002cac420) Go away received I0517 00:45:59.658026 7 log.go:172] (0xc002cac420) (0xc0020dc280) Stream removed, broadcasting: 1 I0517 00:45:59.658052 7 log.go:172] (0xc002cac420) (0xc0020dc500) Stream removed, broadcasting: 3 I0517 00:45:59.658066 7 log.go:172] (0xc002cac420) (0xc0017f23c0) Stream removed, broadcasting: 5 May 17 00:45:59.658: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:45:59.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8719" for this suite. 
• [SLOW TEST:28.539 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":3085,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:45:59.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-653 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-653 STEP: Deleting pre-stop pod May 17 00:46:14.804: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:46:14.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-653" for this suite. • [SLOW TEST:15.181 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":183,"skipped":3106,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:46:14.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying 
the pod is in kubernetes STEP: updating the pod May 17 00:46:19.449: INFO: Successfully updated pod "pod-update-c36e29b3-56e2-4ff3-9aa9-306fb4b36249" STEP: verifying the updated pod is in kubernetes May 17 00:46:19.460: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:46:19.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1816" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":184,"skipped":3119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:46:19.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 17 00:46:24.157: INFO: Successfully updated pod "annotationupdate1a7bd22f-849e-44a6-94f4-5e0440df1604" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:46:26.189: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "projected-6163" for this suite. • [SLOW TEST:6.719 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":185,"skipped":3187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:46:26.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-1819c815-0b9f-4503-a9f2-7aeed5ba011b STEP: Creating a pod to test consume secrets May 17 00:46:26.341: INFO: Waiting up to 5m0s for pod "pod-secrets-ec1cca74-4bd1-4f0f-b2e6-49e25f5f741f" in namespace "secrets-1300" to be "Succeeded or Failed" May 17 00:46:26.343: INFO: Pod "pod-secrets-ec1cca74-4bd1-4f0f-b2e6-49e25f5f741f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.453072ms May 17 00:46:28.348: INFO: Pod "pod-secrets-ec1cca74-4bd1-4f0f-b2e6-49e25f5f741f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007009172s May 17 00:46:30.353: INFO: Pod "pod-secrets-ec1cca74-4bd1-4f0f-b2e6-49e25f5f741f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012124558s STEP: Saw pod success May 17 00:46:30.353: INFO: Pod "pod-secrets-ec1cca74-4bd1-4f0f-b2e6-49e25f5f741f" satisfied condition "Succeeded or Failed" May 17 00:46:30.356: INFO: Trying to get logs from node latest-worker pod pod-secrets-ec1cca74-4bd1-4f0f-b2e6-49e25f5f741f container secret-volume-test: STEP: delete the pod May 17 00:46:30.401: INFO: Waiting for pod pod-secrets-ec1cca74-4bd1-4f0f-b2e6-49e25f5f741f to disappear May 17 00:46:30.410: INFO: Pod pod-secrets-ec1cca74-4bd1-4f0f-b2e6-49e25f5f741f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:46:30.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1300" for this suite. STEP: Destroying namespace "secret-namespace-3182" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":186,"skipped":3234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:46:30.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 17 00:46:35.618: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:46:35.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5100" for this suite. 
• [SLOW TEST:5.330 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":187,"skipped":3259,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:46:35.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 00:46:35.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e" in namespace "downward-api-9858" to be "Succeeded or Failed" May 17 00:46:35.961: INFO: Pod "downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e": Phase="Pending", Reason="", readiness=false. Elapsed: 47.408386ms May 17 00:46:38.013: INFO: Pod "downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.099855112s May 17 00:46:40.018: INFO: Pod "downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10400609s May 17 00:46:42.021: INFO: Pod "downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107545531s STEP: Saw pod success May 17 00:46:42.021: INFO: Pod "downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e" satisfied condition "Succeeded or Failed" May 17 00:46:42.023: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e container client-container: STEP: delete the pod May 17 00:46:42.165: INFO: Waiting for pod downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e to disappear May 17 00:46:42.167: INFO: Pod downwardapi-volume-46b1df38-4fe0-456a-906b-d57be840cc7e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:46:42.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9858" for this suite. 
• [SLOW TEST:6.407 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":3260,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:46:42.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0517 00:46:43.608597 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 17 00:46:43.608: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:46:43.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9294" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":189,"skipped":3261,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:46:43.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 17 00:46:51.795: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 00:46:51.813: INFO: Pod pod-with-poststart-http-hook still exists May 17 00:46:53.814: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 00:46:53.818: INFO: Pod pod-with-poststart-http-hook still exists May 17 00:46:55.814: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 00:46:55.818: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:46:55.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5550" for this suite. 
• [SLOW TEST:12.211 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":190,"skipped":3271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:46:55.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:46:55.923: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 17 00:46:55.944: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:46:55.954: INFO: Number of nodes with available pods: 0 May 17 00:46:55.954: INFO: Node latest-worker is running more than one daemon pod May 17 00:46:56.959: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:46:56.962: INFO: Number of nodes with available pods: 0 May 17 00:46:56.962: INFO: Node latest-worker is running more than one daemon pod May 17 00:46:58.070: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:46:58.074: INFO: Number of nodes with available pods: 0 May 17 00:46:58.074: INFO: Node latest-worker is running more than one daemon pod May 17 00:46:58.959: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:46:58.962: INFO: Number of nodes with available pods: 0 May 17 00:46:58.962: INFO: Node latest-worker is running more than one daemon pod May 17 00:46:59.958: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:46:59.962: INFO: Number of nodes with available pods: 2 May 17 00:46:59.962: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 17 00:47:00.020: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 17 00:47:00.020: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 17 00:47:00.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:47:01.071: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 17 00:47:01.071: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 17 00:47:01.094: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:47:02.082: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 17 00:47:02.082: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 17 00:47:02.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 00:47:03.069: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 17 00:47:03.069: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 17 00:47:03.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:04.070: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:04.070: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:04.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:04.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:05.069: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:05.069: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:05.069: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:05.077: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:06.070: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:06.070: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:06.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:06.073: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:07.069: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:07.069: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:07.069: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:07.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:08.069: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:08.069: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:08.069: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:08.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:09.069: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:09.069: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:09.069: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:09.073: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:10.069: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:10.069: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:10.069: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:10.073: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:11.070: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:11.070: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:11.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:11.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:12.069: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:12.069: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:12.069: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:12.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:13.070: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:13.070: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:13.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:13.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:14.069: INFO: Wrong image for pod: daemon-set-nkg9d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:14.069: INFO: Pod daemon-set-nkg9d is not available
May 17 00:47:14.069: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:14.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:15.068: INFO: Pod daemon-set-gvjg9 is not available
May 17 00:47:15.068: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:15.072: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:16.070: INFO: Pod daemon-set-gvjg9 is not available
May 17 00:47:16.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:16.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:17.118: INFO: Pod daemon-set-gvjg9 is not available
May 17 00:47:17.118: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:17.123: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:18.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:18.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:19.068: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:19.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:20.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:20.070: INFO: Pod daemon-set-w9x7w is not available
May 17 00:47:20.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:21.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:21.070: INFO: Pod daemon-set-w9x7w is not available
May 17 00:47:21.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:22.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:22.070: INFO: Pod daemon-set-w9x7w is not available
May 17 00:47:22.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:23.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:23.070: INFO: Pod daemon-set-w9x7w is not available
May 17 00:47:23.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:24.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:24.070: INFO: Pod daemon-set-w9x7w is not available
May 17 00:47:24.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:25.070: INFO: Wrong image for pod: daemon-set-w9x7w. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
May 17 00:47:25.070: INFO: Pod daemon-set-w9x7w is not available
May 17 00:47:25.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:26.070: INFO: Pod daemon-set-r5jxf is not available
May 17 00:47:26.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
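Throughout this run the control-plane node is skipped because the DaemonSet's pods declare no toleration for its node-role.kubernetes.io/master:NoSchedule taint. As an illustrative sketch only (not part of the test manifest; the container name "app" and labels are assumed, the image and DaemonSet name mirror the log), a DaemonSet that should also schedule onto such a tainted node would declare the toleration explicitly:

```yaml
# Hypothetical sketch: a DaemonSet tolerating the control-plane taint seen in the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set        # label is assumed; the log does not show it
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      tolerations:
      # Matches the taint reported for latest-control-plane:
      # node-role.kubernetes.io/master:NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app            # container name assumed
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
  updateStrategy:
    type: RollingUpdate      # the strategy exercised by this conformance test
```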
May 17 00:47:26.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:26.082: INFO: Number of nodes with available pods: 1
May 17 00:47:26.082: INFO: Node latest-worker2 is running more than one daemon pod
May 17 00:47:27.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:27.091: INFO: Number of nodes with available pods: 1
May 17 00:47:27.091: INFO: Node latest-worker2 is running more than one daemon pod
May 17 00:47:28.096: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:28.123: INFO: Number of nodes with available pods: 1
May 17 00:47:28.123: INFO: Node latest-worker2 is running more than one daemon pod
May 17 00:47:29.088: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:47:29.091: INFO: Number of nodes with available pods: 2
May 17 00:47:29.091: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6522, will wait for the garbage collector to delete the pods
May 17 00:47:29.164: INFO: Deleting DaemonSet.extensions daemon-set took: 6.654178ms
May 17 00:47:29.464: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.237806ms
May 17 00:47:35.284: INFO: Number of nodes with available pods: 0
May 17 00:47:35.284: INFO: Number of running nodes: 0, number of available pods: 0
May 17 00:47:35.286: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6522/daemonsets","resourceVersion":"5295295"},"items":null}
May 17 00:47:35.289: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6522/pods","resourceVersion":"5295295"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:47:35.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6522" for this suite.
• [SLOW TEST:39.480 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":191,"skipped":3298,"failed":0}
SSSSSS
------------------------------
[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:47:35.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have
session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6418 STEP: creating service affinity-nodeport in namespace services-6418 STEP: creating replication controller affinity-nodeport in namespace services-6418 I0517 00:47:35.472360 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6418, replica count: 3 I0517 00:47:38.522751 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:47:41.523007 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 00:47:41.532: INFO: Creating new exec pod May 17 00:47:46.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6418 execpod-affinitygrznd -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 17 00:47:46.801: INFO: stderr: "I0517 00:47:46.695384 3878 log.go:172] (0xc000987080) (0xc000abc320) Create stream\nI0517 00:47:46.695456 3878 log.go:172] (0xc000987080) (0xc000abc320) Stream added, broadcasting: 1\nI0517 00:47:46.699886 3878 log.go:172] (0xc000987080) Reply frame received for 1\nI0517 00:47:46.699919 3878 log.go:172] (0xc000987080) (0xc00052c3c0) Create stream\nI0517 00:47:46.699925 3878 log.go:172] (0xc000987080) (0xc00052c3c0) Stream added, broadcasting: 3\nI0517 00:47:46.700769 3878 log.go:172] (0xc000987080) Reply frame received for 3\nI0517 00:47:46.700799 3878 log.go:172] (0xc000987080) (0xc0004dc6e0) Create stream\nI0517 00:47:46.700808 3878 log.go:172] (0xc000987080) (0xc0004dc6e0) Stream added, broadcasting: 5\nI0517 00:47:46.702096 3878 log.go:172] (0xc000987080) Reply frame received for 5\nI0517 
00:47:46.794258 3878 log.go:172] (0xc000987080) Data frame received for 5\nI0517 00:47:46.794280 3878 log.go:172] (0xc0004dc6e0) (5) Data frame handling\nI0517 00:47:46.794291 3878 log.go:172] (0xc0004dc6e0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0517 00:47:46.794535 3878 log.go:172] (0xc000987080) Data frame received for 5\nI0517 00:47:46.794554 3878 log.go:172] (0xc0004dc6e0) (5) Data frame handling\nI0517 00:47:46.794565 3878 log.go:172] (0xc0004dc6e0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0517 00:47:46.794837 3878 log.go:172] (0xc000987080) Data frame received for 5\nI0517 00:47:46.794851 3878 log.go:172] (0xc0004dc6e0) (5) Data frame handling\nI0517 00:47:46.794977 3878 log.go:172] (0xc000987080) Data frame received for 3\nI0517 00:47:46.794994 3878 log.go:172] (0xc00052c3c0) (3) Data frame handling\nI0517 00:47:46.796789 3878 log.go:172] (0xc000987080) Data frame received for 1\nI0517 00:47:46.796820 3878 log.go:172] (0xc000abc320) (1) Data frame handling\nI0517 00:47:46.796856 3878 log.go:172] (0xc000abc320) (1) Data frame sent\nI0517 00:47:46.796891 3878 log.go:172] (0xc000987080) (0xc000abc320) Stream removed, broadcasting: 1\nI0517 00:47:46.797268 3878 log.go:172] (0xc000987080) Go away received\nI0517 00:47:46.797451 3878 log.go:172] (0xc000987080) (0xc000abc320) Stream removed, broadcasting: 1\nI0517 00:47:46.797481 3878 log.go:172] (0xc000987080) (0xc00052c3c0) Stream removed, broadcasting: 3\nI0517 00:47:46.797497 3878 log.go:172] (0xc000987080) (0xc0004dc6e0) Stream removed, broadcasting: 5\n" May 17 00:47:46.801: INFO: stdout: "" May 17 00:47:46.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6418 execpod-affinitygrznd -- /bin/sh -x -c nc -zv -t -w 2 10.108.170.12 80' May 17 00:47:47.032: INFO: stderr: "I0517 00:47:46.950471 3898 log.go:172] (0xc00003a420) (0xc00084fae0) Create stream\nI0517 
00:47:46.950540 3898 log.go:172] (0xc00003a420) (0xc00084fae0) Stream added, broadcasting: 1\nI0517 00:47:46.952750 3898 log.go:172] (0xc00003a420) Reply frame received for 1\nI0517 00:47:46.952803 3898 log.go:172] (0xc00003a420) (0xc000562c80) Create stream\nI0517 00:47:46.952838 3898 log.go:172] (0xc00003a420) (0xc000562c80) Stream added, broadcasting: 3\nI0517 00:47:46.956759 3898 log.go:172] (0xc00003a420) Reply frame received for 3\nI0517 00:47:46.956812 3898 log.go:172] (0xc00003a420) (0xc000142140) Create stream\nI0517 00:47:46.956832 3898 log.go:172] (0xc00003a420) (0xc000142140) Stream added, broadcasting: 5\nI0517 00:47:46.958074 3898 log.go:172] (0xc00003a420) Reply frame received for 5\nI0517 00:47:47.025040 3898 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:47:47.025075 3898 log.go:172] (0xc000142140) (5) Data frame handling\nI0517 00:47:47.025088 3898 log.go:172] (0xc000142140) (5) Data frame sent\nI0517 00:47:47.025104 3898 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:47:47.025253 3898 log.go:172] (0xc000142140) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.170.12 80\nConnection to 10.108.170.12 80 port [tcp/http] succeeded!\nI0517 00:47:47.025278 3898 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:47:47.025285 3898 log.go:172] (0xc000562c80) (3) Data frame handling\nI0517 00:47:47.027103 3898 log.go:172] (0xc00003a420) Data frame received for 1\nI0517 00:47:47.027148 3898 log.go:172] (0xc00084fae0) (1) Data frame handling\nI0517 00:47:47.027169 3898 log.go:172] (0xc00084fae0) (1) Data frame sent\nI0517 00:47:47.027194 3898 log.go:172] (0xc00003a420) (0xc00084fae0) Stream removed, broadcasting: 1\nI0517 00:47:47.027218 3898 log.go:172] (0xc00003a420) Go away received\nI0517 00:47:47.027658 3898 log.go:172] (0xc00003a420) (0xc00084fae0) Stream removed, broadcasting: 1\nI0517 00:47:47.027684 3898 log.go:172] (0xc00003a420) (0xc000562c80) Stream removed, broadcasting: 3\nI0517 00:47:47.027698 3898 
log.go:172] (0xc00003a420) (0xc000142140) Stream removed, broadcasting: 5\n" May 17 00:47:47.032: INFO: stdout: "" May 17 00:47:47.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6418 execpod-affinitygrznd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31141' May 17 00:47:47.235: INFO: stderr: "I0517 00:47:47.171247 3919 log.go:172] (0xc00003a420) (0xc000682d20) Create stream\nI0517 00:47:47.171346 3919 log.go:172] (0xc00003a420) (0xc000682d20) Stream added, broadcasting: 1\nI0517 00:47:47.179795 3919 log.go:172] (0xc00003a420) Reply frame received for 1\nI0517 00:47:47.179831 3919 log.go:172] (0xc00003a420) (0xc000526dc0) Create stream\nI0517 00:47:47.179840 3919 log.go:172] (0xc00003a420) (0xc000526dc0) Stream added, broadcasting: 3\nI0517 00:47:47.182429 3919 log.go:172] (0xc00003a420) Reply frame received for 3\nI0517 00:47:47.182474 3919 log.go:172] (0xc00003a420) (0xc0000dd900) Create stream\nI0517 00:47:47.182482 3919 log.go:172] (0xc00003a420) (0xc0000dd900) Stream added, broadcasting: 5\nI0517 00:47:47.183165 3919 log.go:172] (0xc00003a420) Reply frame received for 5\nI0517 00:47:47.229659 3919 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:47:47.229701 3919 log.go:172] (0xc0000dd900) (5) Data frame handling\nI0517 00:47:47.229747 3919 log.go:172] (0xc0000dd900) (5) Data frame sent\nI0517 00:47:47.229771 3919 log.go:172] (0xc00003a420) Data frame received for 5\nI0517 00:47:47.229782 3919 log.go:172] (0xc0000dd900) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31141\nConnection to 172.17.0.13 31141 port [tcp/31141] succeeded!\nI0517 00:47:47.229812 3919 log.go:172] (0xc00003a420) Data frame received for 3\nI0517 00:47:47.229833 3919 log.go:172] (0xc000526dc0) (3) Data frame handling\nI0517 00:47:47.230980 3919 log.go:172] (0xc00003a420) Data frame received for 1\nI0517 00:47:47.231014 3919 log.go:172] (0xc000682d20) (1) Data frame handling\nI0517 
00:47:47.231031 3919 log.go:172] (0xc000682d20) (1) Data frame sent\nI0517 00:47:47.231050 3919 log.go:172] (0xc00003a420) (0xc000682d20) Stream removed, broadcasting: 1\nI0517 00:47:47.231087 3919 log.go:172] (0xc00003a420) Go away received\nI0517 00:47:47.231533 3919 log.go:172] (0xc00003a420) (0xc000682d20) Stream removed, broadcasting: 1\nI0517 00:47:47.231549 3919 log.go:172] (0xc00003a420) (0xc000526dc0) Stream removed, broadcasting: 3\nI0517 00:47:47.231557 3919 log.go:172] (0xc00003a420) (0xc0000dd900) Stream removed, broadcasting: 5\n" May 17 00:47:47.235: INFO: stdout: "" May 17 00:47:47.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6418 execpod-affinitygrznd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31141' May 17 00:47:47.495: INFO: stderr: "I0517 00:47:47.371744 3939 log.go:172] (0xc000959340) (0xc000a8e780) Create stream\nI0517 00:47:47.371814 3939 log.go:172] (0xc000959340) (0xc000a8e780) Stream added, broadcasting: 1\nI0517 00:47:47.376512 3939 log.go:172] (0xc000959340) Reply frame received for 1\nI0517 00:47:47.376548 3939 log.go:172] (0xc000959340) (0xc000520f00) Create stream\nI0517 00:47:47.376556 3939 log.go:172] (0xc000959340) (0xc000520f00) Stream added, broadcasting: 3\nI0517 00:47:47.377743 3939 log.go:172] (0xc000959340) Reply frame received for 3\nI0517 00:47:47.377786 3939 log.go:172] (0xc000959340) (0xc00056e320) Create stream\nI0517 00:47:47.377798 3939 log.go:172] (0xc000959340) (0xc00056e320) Stream added, broadcasting: 5\nI0517 00:47:47.378629 3939 log.go:172] (0xc000959340) Reply frame received for 5\nI0517 00:47:47.490300 3939 log.go:172] (0xc000959340) Data frame received for 5\nI0517 00:47:47.490323 3939 log.go:172] (0xc00056e320) (5) Data frame handling\nI0517 00:47:47.490330 3939 log.go:172] (0xc00056e320) (5) Data frame sent\nI0517 00:47:47.490334 3939 log.go:172] (0xc000959340) Data frame received for 5\nI0517 00:47:47.490338 3939 
log.go:172] (0xc00056e320) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31141\nConnection to 172.17.0.12 31141 port [tcp/31141] succeeded!\nI0517 00:47:47.490350 3939 log.go:172] (0xc000959340) Data frame received for 3\nI0517 00:47:47.490354 3939 log.go:172] (0xc000520f00) (3) Data frame handling\nI0517 00:47:47.491456 3939 log.go:172] (0xc000959340) Data frame received for 1\nI0517 00:47:47.491474 3939 log.go:172] (0xc000a8e780) (1) Data frame handling\nI0517 00:47:47.491490 3939 log.go:172] (0xc000a8e780) (1) Data frame sent\nI0517 00:47:47.491505 3939 log.go:172] (0xc000959340) (0xc000a8e780) Stream removed, broadcasting: 1\nI0517 00:47:47.491538 3939 log.go:172] (0xc000959340) Go away received\nI0517 00:47:47.491777 3939 log.go:172] (0xc000959340) (0xc000a8e780) Stream removed, broadcasting: 1\nI0517 00:47:47.491794 3939 log.go:172] (0xc000959340) (0xc000520f00) Stream removed, broadcasting: 3\nI0517 00:47:47.491804 3939 log.go:172] (0xc000959340) (0xc00056e320) Stream removed, broadcasting: 5\n" May 17 00:47:47.495: INFO: stdout: "" May 17 00:47:47.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6418 execpod-affinitygrznd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31141/ ; done' May 17 00:47:47.785: INFO: stderr: "I0517 00:47:47.619272 3959 log.go:172] (0xc000954d10) (0xc000a785a0) Create stream\nI0517 00:47:47.619310 3959 log.go:172] (0xc000954d10) (0xc000a785a0) Stream added, broadcasting: 1\nI0517 00:47:47.623140 3959 log.go:172] (0xc000954d10) Reply frame received for 1\nI0517 00:47:47.623167 3959 log.go:172] (0xc000954d10) (0xc00083a140) Create stream\nI0517 00:47:47.623177 3959 log.go:172] (0xc000954d10) (0xc00083a140) Stream added, broadcasting: 3\nI0517 00:47:47.623856 3959 log.go:172] (0xc000954d10) Reply frame received for 3\nI0517 00:47:47.623897 3959 log.go:172] (0xc000954d10) (0xc00052afa0) 
Create stream\nI0517 00:47:47.623907 3959 log.go:172] (0xc000954d10) (0xc00052afa0) Stream added, broadcasting: 5\nI0517 00:47:47.624540 3959 log.go:172] (0xc000954d10) Reply frame received for 5\nI0517 00:47:47.676920 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.676958 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.676982 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.677016 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.677046 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.677059 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.683897 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.683929 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.683952 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.684666 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.684684 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.684695 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.684707 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.684717 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.684729 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.691875 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.691906 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.691928 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.692463 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.692483 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.692507 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 
00:47:47.692530 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.692546 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.692558 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.698093 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.698112 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.698121 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.698128 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.698134 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.698145 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.698152 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.698159 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.698179 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.704184 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.704202 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.704227 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.704675 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.704687 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.704700 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.704717 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.704727 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.704746 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.712344 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.712361 3959 log.go:172] (0xc00083a140) (3) Data frame 
handling\nI0517 00:47:47.712379 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.712972 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.713005 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.713026 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.713053 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.713067 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.713090 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.716512 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.716532 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.716549 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.717312 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.717449 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.717469 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.717522 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.717532 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.717556 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.723059 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.723086 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.723113 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.723452 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.723473 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.723484 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -sI0517 00:47:47.723504 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.723519 3959 
log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.723538 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.723550 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.723556 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.723573 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.727444 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.727465 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.727482 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.727808 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.727840 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.727855 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.727878 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.727900 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.727917 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.734287 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.734308 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.734323 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.735506 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.735527 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.735560 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.735596 3959 log.go:172] (0xc00083a140) (3) Data frame handling\n+ I0517 00:47:47.735617 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.735646 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\nI0517 00:47:47.735680 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.735703 3959 
log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.735735 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.743795 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.743813 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.743830 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.744633 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.744673 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.744694 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.744721 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.744739 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.744764 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.748361 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.748399 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.748434 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.748920 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.748932 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.748938 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.748946 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.748951 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.748972 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\nI0517 00:47:47.748980 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.748984 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.748994 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\nI0517 00:47:47.755130 
3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.755156 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.755175 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.755881 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.755895 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.755905 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.755914 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.755919 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.755926 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.762258 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.762277 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.762292 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.762898 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.762922 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.762938 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.763018 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.763048 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.763076 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.767468 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.767496 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.767522 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.767665 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.767688 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.767697 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 
00:47:47.767712 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.767721 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.767729 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.772938 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.772961 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.772990 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.773669 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.773700 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.773712 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.773729 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.773764 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.773793 3959 log.go:172] (0xc00052afa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31141/\nI0517 00:47:47.776838 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.776856 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.776875 3959 log.go:172] (0xc00083a140) (3) Data frame sent\nI0517 00:47:47.777334 3959 log.go:172] (0xc000954d10) Data frame received for 3\nI0517 00:47:47.777365 3959 log.go:172] (0xc00083a140) (3) Data frame handling\nI0517 00:47:47.777538 3959 log.go:172] (0xc000954d10) Data frame received for 5\nI0517 00:47:47.777555 3959 log.go:172] (0xc00052afa0) (5) Data frame handling\nI0517 00:47:47.779329 3959 log.go:172] (0xc000954d10) Data frame received for 1\nI0517 00:47:47.779372 3959 log.go:172] (0xc000a785a0) (1) Data frame handling\nI0517 00:47:47.779397 3959 log.go:172] (0xc000a785a0) (1) Data frame sent\nI0517 00:47:47.779416 3959 log.go:172] (0xc000954d10) (0xc000a785a0) Stream removed, broadcasting: 1\nI0517 00:47:47.779433 3959 
log.go:172] (0xc000954d10) Go away received\nI0517 00:47:47.779861 3959 log.go:172] (0xc000954d10) (0xc000a785a0) Stream removed, broadcasting: 1\nI0517 00:47:47.779895 3959 log.go:172] (0xc000954d10) (0xc00083a140) Stream removed, broadcasting: 3\nI0517 00:47:47.779916 3959 log.go:172] (0xc000954d10) (0xc00052afa0) Stream removed, broadcasting: 5\n" May 17 00:47:47.786: INFO: stdout: "\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp\naffinity-nodeport-95hrp" May 17 00:47:47.786: INFO: Received response from host: May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: 
Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Received response from host: affinity-nodeport-95hrp May 17 00:47:47.786: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-6418, will wait for the garbage collector to delete the pods May 17 00:47:47.858: INFO: Deleting ReplicationController affinity-nodeport took: 6.055503ms May 17 00:47:48.358: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.24333ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:47:55.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6418" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.038 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":192,"skipped":3304,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:47:55.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a 
default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6974 STEP: creating a selector STEP: Creating the service pods in kubernetes May 17 00:47:55.398: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 17 00:47:55.511: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 17 00:47:57.515: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 17 00:47:59.527: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:01.516: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:03.515: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:05.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:07.515: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:09.515: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:11.515: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:13.516: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:15.515: INFO: The status of Pod netserver-0 is Running (Ready = false) May 17 00:48:17.515: INFO: The status of Pod netserver-0 is Running (Ready = true) May 17 00:48:17.520: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 17 00:48:21.554: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.206:8080/dial?request=hostname&protocol=udp&host=10.244.1.205&port=8081&tries=1'] Namespace:pod-network-test-6974 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 
00:48:21.554: INFO: >>> kubeConfig: /root/.kube/config I0517 00:48:21.584300 7 log.go:172] (0xc002cac630) (0xc0005dc000) Create stream I0517 00:48:21.584325 7 log.go:172] (0xc002cac630) (0xc0005dc000) Stream added, broadcasting: 1 I0517 00:48:21.586145 7 log.go:172] (0xc002cac630) Reply frame received for 1 I0517 00:48:21.586173 7 log.go:172] (0xc002cac630) (0xc0005125a0) Create stream I0517 00:48:21.586181 7 log.go:172] (0xc002cac630) (0xc0005125a0) Stream added, broadcasting: 3 I0517 00:48:21.587109 7 log.go:172] (0xc002cac630) Reply frame received for 3 I0517 00:48:21.587146 7 log.go:172] (0xc002cac630) (0xc0005dc500) Create stream I0517 00:48:21.587174 7 log.go:172] (0xc002cac630) (0xc0005dc500) Stream added, broadcasting: 5 I0517 00:48:21.588322 7 log.go:172] (0xc002cac630) Reply frame received for 5 I0517 00:48:21.684192 7 log.go:172] (0xc002cac630) Data frame received for 3 I0517 00:48:21.684214 7 log.go:172] (0xc0005125a0) (3) Data frame handling I0517 00:48:21.684227 7 log.go:172] (0xc0005125a0) (3) Data frame sent I0517 00:48:21.684861 7 log.go:172] (0xc002cac630) Data frame received for 3 I0517 00:48:21.684898 7 log.go:172] (0xc0005125a0) (3) Data frame handling I0517 00:48:21.684951 7 log.go:172] (0xc002cac630) Data frame received for 5 I0517 00:48:21.684970 7 log.go:172] (0xc0005dc500) (5) Data frame handling I0517 00:48:21.687047 7 log.go:172] (0xc002cac630) Data frame received for 1 I0517 00:48:21.687081 7 log.go:172] (0xc0005dc000) (1) Data frame handling I0517 00:48:21.687106 7 log.go:172] (0xc0005dc000) (1) Data frame sent I0517 00:48:21.687173 7 log.go:172] (0xc002cac630) (0xc0005dc000) Stream removed, broadcasting: 1 I0517 00:48:21.687211 7 log.go:172] (0xc002cac630) Go away received I0517 00:48:21.687320 7 log.go:172] (0xc002cac630) (0xc0005dc000) Stream removed, broadcasting: 1 I0517 00:48:21.687349 7 log.go:172] (0xc002cac630) (0xc0005125a0) Stream removed, broadcasting: 3 I0517 00:48:21.687366 7 log.go:172] (0xc002cac630) (0xc0005dc500) 
Stream removed, broadcasting: 5 May 17 00:48:21.687: INFO: Waiting for responses: map[] May 17 00:48:21.690: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.206:8080/dial?request=hostname&protocol=udp&host=10.244.2.249&port=8081&tries=1'] Namespace:pod-network-test-6974 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 00:48:21.690: INFO: >>> kubeConfig: /root/.kube/config I0517 00:48:21.714284 7 log.go:172] (0xc0017f6bb0) (0xc0002c9400) Create stream I0517 00:48:21.714309 7 log.go:172] (0xc0017f6bb0) (0xc0002c9400) Stream added, broadcasting: 1 I0517 00:48:21.715865 7 log.go:172] (0xc0017f6bb0) Reply frame received for 1 I0517 00:48:21.715894 7 log.go:172] (0xc0017f6bb0) (0xc0010dc1e0) Create stream I0517 00:48:21.715909 7 log.go:172] (0xc0017f6bb0) (0xc0010dc1e0) Stream added, broadcasting: 3 I0517 00:48:21.716744 7 log.go:172] (0xc0017f6bb0) Reply frame received for 3 I0517 00:48:21.716776 7 log.go:172] (0xc0017f6bb0) (0xc0002c9540) Create stream I0517 00:48:21.716786 7 log.go:172] (0xc0017f6bb0) (0xc0002c9540) Stream added, broadcasting: 5 I0517 00:48:21.717842 7 log.go:172] (0xc0017f6bb0) Reply frame received for 5 I0517 00:48:21.790264 7 log.go:172] (0xc0017f6bb0) Data frame received for 3 I0517 00:48:21.790310 7 log.go:172] (0xc0010dc1e0) (3) Data frame handling I0517 00:48:21.790331 7 log.go:172] (0xc0010dc1e0) (3) Data frame sent I0517 00:48:21.790523 7 log.go:172] (0xc0017f6bb0) Data frame received for 5 I0517 00:48:21.790542 7 log.go:172] (0xc0002c9540) (5) Data frame handling I0517 00:48:21.790576 7 log.go:172] (0xc0017f6bb0) Data frame received for 3 I0517 00:48:21.790591 7 log.go:172] (0xc0010dc1e0) (3) Data frame handling I0517 00:48:21.791755 7 log.go:172] (0xc0017f6bb0) Data frame received for 1 I0517 00:48:21.791797 7 log.go:172] (0xc0002c9400) (1) Data frame handling I0517 00:48:21.791823 7 log.go:172] (0xc0002c9400) (1) Data frame sent I0517 
00:48:21.791875 7 log.go:172] (0xc0017f6bb0) (0xc0002c9400) Stream removed, broadcasting: 1 I0517 00:48:21.791907 7 log.go:172] (0xc0017f6bb0) Go away received I0517 00:48:21.792059 7 log.go:172] (0xc0017f6bb0) (0xc0002c9400) Stream removed, broadcasting: 1 I0517 00:48:21.792108 7 log.go:172] (0xc0017f6bb0) (0xc0010dc1e0) Stream removed, broadcasting: 3 I0517 00:48:21.792147 7 log.go:172] (0xc0017f6bb0) (0xc0002c9540) Stream removed, broadcasting: 5 May 17 00:48:21.792: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:48:21.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6974" for this suite. • [SLOW TEST:26.454 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3304,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:48:21.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api 
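The intra-pod UDP check above works by having the test-container pod curl the netserver's `/dial` endpoint, which replies with a JSON list of the hostnames that answered; the test then removes each responding hostname from its expected set, and the `Waiting for responses: map[]` lines signal that the set emptied, i.e. every expected backend replied. A minimal local sketch of that success check, with the `/dial` reply hard-coded instead of fetched over the pod network (the reply string and expected hostname here are illustrative, not taken from a live cluster):

```shell
# Hypothetical /dial reply in the shape the netserver returns, e.g.
# {"responses":["netserver-0"]} -- hard-coded here instead of curl'ed.
reply='{"responses":["netserver-0"]}'

# Strip the JSON punctuation and pull out the responding hostname.
got=$(echo "$reply" | tr -d '{}[]"' | cut -d: -f2)

# The e2e test deletes each responder from its expected map; an empty
# remainder is success, which the framework logs as "map[]".
expected="netserver-0"
if [ "$got" = "$expected" ]; then
  echo "Waiting for responses: map[]"
fi
```

A real run issues the same request with `curl -g -q -s 'http://<pod-ip>:8080/dial?...&tries=1'` from inside the exec'd container, exactly as the `ExecWithOptions` lines above show.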
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 00:48:21.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f184e454-595c-4c69-884b-1ca3118379b8" in namespace "downward-api-3855" to be "Succeeded or Failed" May 17 00:48:21.873: INFO: Pod "downwardapi-volume-f184e454-595c-4c69-884b-1ca3118379b8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.367385ms May 17 00:48:23.889: INFO: Pod "downwardapi-volume-f184e454-595c-4c69-884b-1ca3118379b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031256125s May 17 00:48:25.894: INFO: Pod "downwardapi-volume-f184e454-595c-4c69-884b-1ca3118379b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035817654s STEP: Saw pod success May 17 00:48:25.894: INFO: Pod "downwardapi-volume-f184e454-595c-4c69-884b-1ca3118379b8" satisfied condition "Succeeded or Failed" May 17 00:48:25.896: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f184e454-595c-4c69-884b-1ca3118379b8 container client-container: STEP: delete the pod May 17 00:48:26.067: INFO: Waiting for pod downwardapi-volume-f184e454-595c-4c69-884b-1ca3118379b8 to disappear May 17 00:48:26.108: INFO: Pod downwardapi-volume-f184e454-595c-4c69-884b-1ca3118379b8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:48:26.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3855" for this suite. 
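The Downward API volume test above succeeds when the pod's container can read its own name from a file the kubelet projects into the volume. A rough local simulation of that round trip, standing in for the kubelet with a temp directory (the directory path and pod name below are stand-ins, not values from the run):

```shell
# Simulate the downwardAPI volume: the kubelet writes metadata.name
# into a file inside the mounted volume...
podinfo=$(mktemp -d)
printf 'downwardapi-volume-example' > "$podinfo/podname"

# ...and the client-container simply reads it back, which is what the
# test's log-fetch from client-container verifies.
cat "$podinfo/podname"
```

In the actual test the file lands under the volume's `mountPath` in the pod, and the assertion is that the container's output matches the pod's `metadata.name`.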
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":194,"skipped":3305,"failed":0} SSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:48:26.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1388 STEP: creating service affinity-clusterip-transition in namespace services-1388 STEP: creating replication controller affinity-clusterip-transition in namespace services-1388 I0517 00:48:26.235367 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1388, replica count: 3 I0517 00:48:29.285789 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 00:48:32.286077 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 00:48:32.291: INFO: Creating new exec pod May 17 00:48:37.311: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1388 execpod-affinityr89k5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 17 00:48:37.563: INFO: stderr: "I0517 00:48:37.447751 3981 log.go:172] (0xc000b4cfd0) (0xc0006fc960) Create stream\nI0517 00:48:37.447791 3981 log.go:172] (0xc000b4cfd0) (0xc0006fc960) Stream added, broadcasting: 1\nI0517 00:48:37.456104 3981 log.go:172] (0xc000b4cfd0) Reply frame received for 1\nI0517 00:48:37.456223 3981 log.go:172] (0xc000b4cfd0) (0xc00051b400) Create stream\nI0517 00:48:37.456284 3981 log.go:172] (0xc000b4cfd0) (0xc00051b400) Stream added, broadcasting: 3\nI0517 00:48:37.458505 3981 log.go:172] (0xc000b4cfd0) Reply frame received for 3\nI0517 00:48:37.458530 3981 log.go:172] (0xc000b4cfd0) (0xc0004a6780) Create stream\nI0517 00:48:37.458543 3981 log.go:172] (0xc000b4cfd0) (0xc0004a6780) Stream added, broadcasting: 5\nI0517 00:48:37.460055 3981 log.go:172] (0xc000b4cfd0) Reply frame received for 5\nI0517 00:48:37.540528 3981 log.go:172] (0xc000b4cfd0) Data frame received for 5\nI0517 00:48:37.540566 3981 log.go:172] (0xc0004a6780) (5) Data frame handling\nI0517 00:48:37.540590 3981 log.go:172] (0xc0004a6780) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0517 00:48:37.557727 3981 log.go:172] (0xc000b4cfd0) Data frame received for 5\nI0517 00:48:37.557760 3981 log.go:172] (0xc0004a6780) (5) Data frame handling\nI0517 00:48:37.557774 3981 log.go:172] (0xc0004a6780) (5) Data frame sent\nI0517 00:48:37.557784 3981 log.go:172] (0xc000b4cfd0) Data frame received for 5\nI0517 00:48:37.557794 3981 log.go:172] (0xc0004a6780) (5) Data frame handling\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0517 00:48:37.557891 3981 log.go:172] (0xc000b4cfd0) Data frame received for 3\nI0517 00:48:37.557903 3981 log.go:172] (0xc00051b400) (3) Data frame handling\nI0517 00:48:37.559378 3981 log.go:172] (0xc000b4cfd0) Data 
frame received for 1\nI0517 00:48:37.559407 3981 log.go:172] (0xc0006fc960) (1) Data frame handling\nI0517 00:48:37.559426 3981 log.go:172] (0xc0006fc960) (1) Data frame sent\nI0517 00:48:37.559457 3981 log.go:172] (0xc000b4cfd0) (0xc0006fc960) Stream removed, broadcasting: 1\nI0517 00:48:37.559499 3981 log.go:172] (0xc000b4cfd0) Go away received\nI0517 00:48:37.559752 3981 log.go:172] (0xc000b4cfd0) (0xc0006fc960) Stream removed, broadcasting: 1\nI0517 00:48:37.559779 3981 log.go:172] (0xc000b4cfd0) (0xc00051b400) Stream removed, broadcasting: 3\nI0517 00:48:37.559790 3981 log.go:172] (0xc000b4cfd0) (0xc0004a6780) Stream removed, broadcasting: 5\n" May 17 00:48:37.564: INFO: stdout: "" May 17 00:48:37.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1388 execpod-affinityr89k5 -- /bin/sh -x -c nc -zv -t -w 2 10.100.153.3 80' May 17 00:48:37.755: INFO: stderr: "I0517 00:48:37.688871 4001 log.go:172] (0xc000a12840) (0xc000697900) Create stream\nI0517 00:48:37.688927 4001 log.go:172] (0xc000a12840) (0xc000697900) Stream added, broadcasting: 1\nI0517 00:48:37.691769 4001 log.go:172] (0xc000a12840) Reply frame received for 1\nI0517 00:48:37.691814 4001 log.go:172] (0xc000a12840) (0xc00050c460) Create stream\nI0517 00:48:37.691828 4001 log.go:172] (0xc000a12840) (0xc00050c460) Stream added, broadcasting: 3\nI0517 00:48:37.692665 4001 log.go:172] (0xc000a12840) Reply frame received for 3\nI0517 00:48:37.692704 4001 log.go:172] (0xc000a12840) (0xc0006595e0) Create stream\nI0517 00:48:37.692713 4001 log.go:172] (0xc000a12840) (0xc0006595e0) Stream added, broadcasting: 5\nI0517 00:48:37.693724 4001 log.go:172] (0xc000a12840) Reply frame received for 5\nI0517 00:48:37.750728 4001 log.go:172] (0xc000a12840) Data frame received for 5\nI0517 00:48:37.750769 4001 log.go:172] (0xc000a12840) Data frame received for 3\nI0517 00:48:37.750786 4001 log.go:172] (0xc00050c460) (3) Data frame 
handling\nI0517 00:48:37.750801 4001 log.go:172] (0xc0006595e0) (5) Data frame handling\nI0517 00:48:37.750810 4001 log.go:172] (0xc0006595e0) (5) Data frame sent\nI0517 00:48:37.750815 4001 log.go:172] (0xc000a12840) Data frame received for 5\nI0517 00:48:37.750820 4001 log.go:172] (0xc0006595e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.153.3 80\nConnection to 10.100.153.3 80 port [tcp/http] succeeded!\nI0517 00:48:37.752168 4001 log.go:172] (0xc000a12840) Data frame received for 1\nI0517 00:48:37.752184 4001 log.go:172] (0xc000697900) (1) Data frame handling\nI0517 00:48:37.752192 4001 log.go:172] (0xc000697900) (1) Data frame sent\nI0517 00:48:37.752207 4001 log.go:172] (0xc000a12840) (0xc000697900) Stream removed, broadcasting: 1\nI0517 00:48:37.752244 4001 log.go:172] (0xc000a12840) Go away received\nI0517 00:48:37.752473 4001 log.go:172] (0xc000a12840) (0xc000697900) Stream removed, broadcasting: 1\nI0517 00:48:37.752489 4001 log.go:172] (0xc000a12840) (0xc00050c460) Stream removed, broadcasting: 3\nI0517 00:48:37.752507 4001 log.go:172] (0xc000a12840) (0xc0006595e0) Stream removed, broadcasting: 5\n" May 17 00:48:37.756: INFO: stdout: "" May 17 00:48:37.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1388 execpod-affinityr89k5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.153.3:80/ ; done' May 17 00:48:38.062: INFO: stderr: "I0517 00:48:37.887550 4020 log.go:172] (0xc000bb4d10) (0xc00025b2c0) Create stream\nI0517 00:48:37.887594 4020 log.go:172] (0xc000bb4d10) (0xc00025b2c0) Stream added, broadcasting: 1\nI0517 00:48:37.890538 4020 log.go:172] (0xc000bb4d10) Reply frame received for 1\nI0517 00:48:37.890591 4020 log.go:172] (0xc000bb4d10) (0xc0005ba780) Create stream\nI0517 00:48:37.890605 4020 log.go:172] (0xc000bb4d10) (0xc0005ba780) Stream added, broadcasting: 3\nI0517 00:48:37.891584 4020 log.go:172] 
(0xc000bb4d10) Reply frame received for 3\nI0517 00:48:37.891601 4020 log.go:172] (0xc000bb4d10) (0xc00015cfa0) Create stream\nI0517 00:48:37.891607 4020 log.go:172] (0xc000bb4d10) (0xc00015cfa0) Stream added, broadcasting: 5\nI0517 00:48:37.892489 4020 log.go:172] (0xc000bb4d10) Reply frame received for 5\nI0517 00:48:37.959436 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:37.959465 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:37.959485 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.959511 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.959523 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:37.959544 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\nI0517 00:48:37.976358 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.976381 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.976400 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:37.977365 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:37.977397 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:37.977415 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:37.977443 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.977463 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.977487 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:37.981040 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.981057 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.981080 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:37.982111 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.982134 4020 log.go:172] (0xc000bb4d10) 
Data frame received for 5\nI0517 00:48:37.982156 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:37.982166 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:37.982180 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.982193 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:37.991046 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.991068 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.991087 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:37.991913 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.991953 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.991972 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:37.991994 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:37.992011 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:37.992027 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:37.996399 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.996420 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.996438 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:37.997469 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:37.997488 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:37.997497 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:37.997512 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:37.997523 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:37.997531 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.001039 4020 log.go:172] (0xc000bb4d10) 
Data frame received for 3\nI0517 00:48:38.001059 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.001075 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.001722 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.001739 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.001757 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.001789 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.001805 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.001828 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.005039 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.005064 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.005103 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.006246 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.006283 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.006322 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.006411 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.006426 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.006445 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.009777 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.009807 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.009849 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.009990 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.010005 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.010014 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.100.153.3:80/\nI0517 00:48:38.010085 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.010103 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.010113 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.016648 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.016670 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.016691 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.017826 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.017870 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.017895 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.017929 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.017955 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.017974 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.021818 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.021845 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.021866 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.022155 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.022178 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.022222 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.022242 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.022259 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.022283 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.025718 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.025763 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.025805 4020 
log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.026246 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.026272 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.026308 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.026324 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0517 00:48:38.026345 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.026372 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.026393 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.026417 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.026436 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n http://10.100.153.3:80/\nI0517 00:48:38.029954 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.029985 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.030008 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.030510 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.030534 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.030549 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.030582 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.030595 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.030606 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\nI0517 00:48:38.030617 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.030632 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\n+ echo\nI0517 00:48:38.030669 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\nI0517 00:48:38.030896 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.030921 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.030945 4020 log.go:172] (0xc00015cfa0) (5) Data frame 
sent\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.034994 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.035013 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.035025 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.035468 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.035486 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.035499 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.035510 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.035521 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.035537 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.039200 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.039231 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.039263 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.039730 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.039755 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.039777 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.039786 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\nI0517 00:48:38.039794 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.039800 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.039820 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\nI0517 00:48:38.039833 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.039841 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.044239 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.044267 4020 log.go:172] (0xc0005ba780) (3) Data frame 
handling\nI0517 00:48:38.044280 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.044799 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.044826 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.044847 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.044866 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.044877 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.044895 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.048752 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.048773 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.048802 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.049386 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.049419 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\nI0517 00:48:38.049450 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\nI0517 00:48:38.049474 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.049499 4020 log.go:172] (0xc00015cfa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.049523 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.049578 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.049594 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.049608 4020 log.go:172] (0xc00015cfa0) (5) Data frame sent\nI0517 00:48:38.053526 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.053547 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.053564 4020 log.go:172] (0xc0005ba780) (3) Data frame sent\nI0517 00:48:38.054206 4020 log.go:172] (0xc000bb4d10) Data frame received for 5\nI0517 00:48:38.054226 4020 log.go:172] (0xc00015cfa0) 
(5) Data frame handling\nI0517 00:48:38.054329 4020 log.go:172] (0xc000bb4d10) Data frame received for 3\nI0517 00:48:38.054348 4020 log.go:172] (0xc0005ba780) (3) Data frame handling\nI0517 00:48:38.056197 4020 log.go:172] (0xc000bb4d10) Data frame received for 1\nI0517 00:48:38.056237 4020 log.go:172] (0xc00025b2c0) (1) Data frame handling\nI0517 00:48:38.056252 4020 log.go:172] (0xc00025b2c0) (1) Data frame sent\nI0517 00:48:38.056293 4020 log.go:172] (0xc000bb4d10) (0xc00025b2c0) Stream removed, broadcasting: 1\nI0517 00:48:38.056372 4020 log.go:172] (0xc000bb4d10) Go away received\nI0517 00:48:38.057914 4020 log.go:172] (0xc000bb4d10) (0xc00025b2c0) Stream removed, broadcasting: 1\nI0517 00:48:38.057951 4020 log.go:172] (0xc000bb4d10) (0xc0005ba780) Stream removed, broadcasting: 3\nI0517 00:48:38.057972 4020 log.go:172] (0xc000bb4d10) (0xc00015cfa0) Stream removed, broadcasting: 5\n" May 17 00:48:38.062: INFO: stdout: "\naffinity-clusterip-transition-tvt5c\naffinity-clusterip-transition-tvt5c\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-tvt5c\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-tvt5c\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-qvgrq\naffinity-clusterip-transition-qvgrq\naffinity-clusterip-transition-tvt5c\naffinity-clusterip-transition-qvgrq\naffinity-clusterip-transition-tvt5c\naffinity-clusterip-transition-tvt5c\naffinity-clusterip-transition-qvgrq\naffinity-clusterip-transition-tvt5c" May 17 00:48:38.062: INFO: Received response from host: May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-tvt5c May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-tvt5c May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-tvt5c May 17 00:48:38.062: INFO: Received 
response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-tvt5c May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-qvgrq May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-qvgrq May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-tvt5c May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-qvgrq May 17 00:48:38.062: INFO: Received response from host: affinity-clusterip-transition-tvt5c May 17 00:48:38.063: INFO: Received response from host: affinity-clusterip-transition-tvt5c May 17 00:48:38.063: INFO: Received response from host: affinity-clusterip-transition-qvgrq May 17 00:48:38.063: INFO: Received response from host: affinity-clusterip-transition-tvt5c May 17 00:48:38.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1388 execpod-affinityr89k5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.153.3:80/ ; done' May 17 00:48:38.406: INFO: stderr: "I0517 00:48:38.247147 4041 log.go:172] (0xc0007c2fd0) (0xc000830e60) Create stream\nI0517 00:48:38.247204 4041 log.go:172] (0xc0007c2fd0) (0xc000830e60) Stream added, broadcasting: 1\nI0517 00:48:38.252346 4041 log.go:172] (0xc0007c2fd0) Reply frame received for 1\nI0517 00:48:38.252397 4041 log.go:172] (0xc0007c2fd0) (0xc00082b4a0) Create stream\nI0517 00:48:38.252411 4041 log.go:172] (0xc0007c2fd0) (0xc00082b4a0) Stream added, broadcasting: 3\nI0517 00:48:38.253750 4041 log.go:172] (0xc0007c2fd0) Reply frame received for 3\nI0517 00:48:38.253805 4041 log.go:172] (0xc0007c2fd0) (0xc00081ea00) 
Create stream\nI0517 00:48:38.253826 4041 log.go:172] (0xc0007c2fd0) (0xc00081ea00) Stream added, broadcasting: 5\nI0517 00:48:38.254936 4041 log.go:172] (0xc0007c2fd0) Reply frame received for 5\nI0517 00:48:38.312484 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.312518 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.312552 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.312584 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.312600 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.312624 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\nI0517 00:48:38.318656 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.318683 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.318710 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.318997 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.319018 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.319030 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\nI0517 00:48:38.319037 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.319042 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.319054 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\nI0517 00:48:38.319163 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.319185 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.319204 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.322530 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.322544 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.322563 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 
00:48:38.323040 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.323064 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.323076 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.323090 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.323099 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.323110 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.326648 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.326671 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.326681 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.327146 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.327165 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.327182 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.327245 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.327263 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.327279 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.334637 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.334660 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.334682 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.335633 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.335651 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.335659 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.335683 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.335709 4041 log.go:172] (0xc00082b4a0) (3) Data frame 
handling\nI0517 00:48:38.335729 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.342198 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.342221 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.342235 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.342699 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.342716 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.342732 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.342756 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.342770 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.342780 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\nI0517 00:48:38.342790 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.342799 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.342814 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\nI0517 00:48:38.347975 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.347996 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.348010 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.348527 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.348571 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.348587 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.348606 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.348614 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.348625 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.352716 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.352743 4041 log.go:172] (0xc00082b4a0) 
(3) Data frame handling\nI0517 00:48:38.352773 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.353085 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.353103 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.353237 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.353255 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.353261 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.353277 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.359194 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.359217 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.359239 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.359695 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.359712 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.359718 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.359736 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.359767 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.359792 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.363240 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.363257 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.363266 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.363675 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.363765 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.363825 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.363854 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.363875 4041 log.go:172] 
(0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.363892 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.368443 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.368470 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.368499 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.368706 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.368726 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.368733 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.368742 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.368747 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.368752 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.373302 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.373323 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.373339 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.373637 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.373656 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.373663 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.373673 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.373678 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.373684 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.377624 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.377664 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.377697 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.377965 4041 log.go:172] 
(0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.377998 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.378026 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.378087 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.378106 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.378122 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.383510 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.383532 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.383560 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.384195 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.384221 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.384234 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.384258 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.384273 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.384303 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.389331 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.389368 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.389402 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.389638 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.389653 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.389659 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.389682 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.389705 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.389727 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.394213 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.394236 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.394253 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.395160 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.395198 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.395217 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.395241 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.395259 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.395276 4041 log.go:172] (0xc00081ea00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.153.3:80/\nI0517 00:48:38.398657 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.398683 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.398701 4041 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0517 00:48:38.399049 4041 log.go:172] (0xc0007c2fd0) Data frame received for 3\nI0517 00:48:38.399099 4041 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0517 00:48:38.399171 4041 log.go:172] (0xc0007c2fd0) Data frame received for 5\nI0517 00:48:38.399181 4041 log.go:172] (0xc00081ea00) (5) Data frame handling\nI0517 00:48:38.400751 4041 log.go:172] (0xc0007c2fd0) Data frame received for 1\nI0517 00:48:38.400770 4041 log.go:172] (0xc000830e60) (1) Data frame handling\nI0517 00:48:38.400784 4041 log.go:172] (0xc000830e60) (1) Data frame sent\nI0517 00:48:38.400801 4041 log.go:172] (0xc0007c2fd0) (0xc000830e60) Stream removed, broadcasting: 1\nI0517 00:48:38.400829 4041 log.go:172] (0xc0007c2fd0) Go away received\nI0517 00:48:38.401293 4041 log.go:172] (0xc0007c2fd0) (0xc000830e60) Stream removed, broadcasting: 1\nI0517 00:48:38.401308 4041 log.go:172] (0xc0007c2fd0) (0xc00082b4a0) Stream removed, broadcasting: 3\nI0517 
00:48:38.401315 4041 log.go:172] (0xc0007c2fd0) (0xc00081ea00) Stream removed, broadcasting: 5\n" May 17 00:48:38.406: INFO: stdout: "\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g\naffinity-clusterip-transition-nv28g" May 17 00:48:38.407: INFO: Received response from host: May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 
00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Received response from host: affinity-clusterip-transition-nv28g May 17 00:48:38.407: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1388, will wait for the garbage collector to delete the pods May 17 00:48:38.515: INFO: Deleting ReplicationController affinity-clusterip-transition took: 10.184065ms May 17 00:48:38.916: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.332216ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:48:55.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1388" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:29.305 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":195,"skipped":3308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:48:55.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:49:01.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9072" for this suite. STEP: Destroying namespace "nsdeletetest-7550" for this suite. May 17 00:49:01.708: INFO: Namespace nsdeletetest-7550 was already deleted STEP: Destroying namespace "nsdeletetest-9833" for this suite. 
• [SLOW TEST:6.325 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":196,"skipped":3343,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:49:01.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-9638/configmap-test-66d00b06-8173-4304-ab68-1b806d0d7709
STEP: Creating a pod to test consume configMaps
May 17 00:49:01.829: INFO: Waiting up to 5m0s for pod "pod-configmaps-e2d17068-14cb-4d96-b48b-299f4daeac02" in namespace "configmap-9638" to be "Succeeded or Failed"
May 17 00:49:01.839: INFO: Pod "pod-configmaps-e2d17068-14cb-4d96-b48b-299f4daeac02": Phase="Pending", Reason="", readiness=false. Elapsed: 10.198898ms
May 17 00:49:03.844: INFO: Pod "pod-configmaps-e2d17068-14cb-4d96-b48b-299f4daeac02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014618217s
May 17 00:49:05.848: INFO: Pod "pod-configmaps-e2d17068-14cb-4d96-b48b-299f4daeac02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019399617s
STEP: Saw pod success
May 17 00:49:05.848: INFO: Pod "pod-configmaps-e2d17068-14cb-4d96-b48b-299f4daeac02" satisfied condition "Succeeded or Failed"
May 17 00:49:05.851: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e2d17068-14cb-4d96-b48b-299f4daeac02 container env-test:
STEP: delete the pod
May 17 00:49:05.896: INFO: Waiting for pod pod-configmaps-e2d17068-14cb-4d96-b48b-299f4daeac02 to disappear
May 17 00:49:05.929: INFO: Pod pod-configmaps-e2d17068-14cb-4d96-b48b-299f4daeac02 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:49:05.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9638" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3358,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:49:05.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 17 00:49:06.048: INFO: Waiting up to 5m0s for pod "pod-2b1a97b5-18b9-4347-a002-947cadcb7403" in namespace "emptydir-9060" to be "Succeeded or Failed"
May 17 00:49:06.076: INFO: Pod "pod-2b1a97b5-18b9-4347-a002-947cadcb7403": Phase="Pending", Reason="", readiness=false. Elapsed: 27.608866ms
May 17 00:49:08.098: INFO: Pod "pod-2b1a97b5-18b9-4347-a002-947cadcb7403": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050008992s
May 17 00:49:10.102: INFO: Pod "pod-2b1a97b5-18b9-4347-a002-947cadcb7403": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054351819s
STEP: Saw pod success
May 17 00:49:10.102: INFO: Pod "pod-2b1a97b5-18b9-4347-a002-947cadcb7403" satisfied condition "Succeeded or Failed"
May 17 00:49:10.106: INFO: Trying to get logs from node latest-worker2 pod pod-2b1a97b5-18b9-4347-a002-947cadcb7403 container test-container:
STEP: delete the pod
May 17 00:49:10.289: INFO: Waiting for pod pod-2b1a97b5-18b9-4347-a002-947cadcb7403 to disappear
May 17 00:49:10.335: INFO: Pod pod-2b1a97b5-18b9-4347-a002-947cadcb7403 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:49:10.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9060" for this suite.
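Each "Waiting up to 5m0s for pod … to be "Succeeded or Failed"" run in the log above is a poll-until-deadline loop: fetch the pod phase, log the elapsed time, and stop on a terminal phase or on timeout. A minimal stdlib Go sketch of that pattern, assuming a stubbed phase source in place of real API-server GETs:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil calls check every interval until it reports done or the
// timeout elapses, mirroring the e2e framework's pod-phase wait.
func pollUntil(interval, timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := check()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Stubbed phase sequence standing in for repeated GETs of the pod.
	phases := []string{"Pending", "Pending", "Succeeded"}
	i := 0
	start := time.Now()
	err := pollUntil(10*time.Millisecond, time.Second, func() (bool, error) {
		phase := phases[i]
		if i < len(phases)-1 {
			i++
		}
		fmt.Printf("Pod phase=%q elapsed=%v\n", phase, time.Since(start).Round(time.Millisecond))
		// Terminal phases end the wait, like "Succeeded or Failed" above.
		return phase == "Succeeded" || phase == "Failed", nil
	})
	fmt.Println("err:", err)
}
```

The real framework layers retries on transient API errors and richer logging on top, but the control flow is the same.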
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":198,"skipped":3375,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:49:10.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Starting the proxy
May 17 00:49:10.403: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix481050653/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:49:10.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2589" for this suite.
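The `--unix-socket` proxy exercised above serves plain HTTP over a unix domain socket instead of a TCP port, and the test then fetches `/api/` through it. A self-contained stdlib Go sketch of that client side (the socket path and the tiny local server here are stand-ins, not the test's actual `kubectl proxy`):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
	"path/filepath"
)

func main() {
	sock := filepath.Join(os.TempDir(), "proxy-demo.sock")
	os.Remove(sock) // ignore error; the socket may not exist yet

	// Stand-in for `kubectl proxy --unix-socket=...`: an HTTP server
	// listening on a unix domain socket.
	ln, err := net.Listen("unix", sock)
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	mux := http.NewServeMux()
	mux.HandleFunc("/api/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"kind":"APIVersions"}`)
	})
	go http.Serve(ln, mux)

	// An http.Client whose transport dials the socket; the URL's host
	// part is ignored, it only selects the scheme and path.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}
	resp, err := client.Get("http://unix/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```

On the command line, `curl --unix-socket /tmp/kubectl-proxy-unix481050653/test http://localhost/api/` does the equivalent against the real proxy.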
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":199,"skipped":3376,"failed":0}
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:49:10.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 17 00:49:18.738: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 00:49:18.801: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 00:49:20.801: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 00:49:20.806: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 00:49:22.801: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 00:49:22.805: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 00:49:24.801: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 00:49:24.806: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 00:49:26.801: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 00:49:26.805: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:49:26.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3883" for this suite.
• [SLOW TEST:16.331 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3376,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:49:26.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 17 00:49:27.468: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 17 00:49:29.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273367, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273367, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273367, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273367, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 17 00:49:31.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273367, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273367, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273367, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273367, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 17 00:49:34.515: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 17 00:49:38.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-3479 to-be-attached-pod -i -c=container1'
May 17 00:49:38.711: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:49:38.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3479" for this suite.
STEP: Destroying namespace "webhook-3479-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.016 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":201,"skipped":3379,"failed":0}
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:49:38.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-372
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 17 00:49:38.883: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 17 00:49:38.959: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 17 00:49:41.100: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 17 00:49:42.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 17 00:49:44.986: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 17 00:49:46.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 17 00:49:48.968: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 17 00:49:50.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 17 00:49:52.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 17 00:49:54.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 17 00:49:56.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 17 00:49:58.986: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 17 00:49:58.992: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 17 00:50:00.997: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 17 00:50:07.077: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.210:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-372 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 17 00:50:07.077: INFO: >>> kubeConfig: /root/.kube/config
I0517 00:50:07.109879 7 log.go:172] (0xc0024f26e0) (0xc001575d60) Create stream
I0517 00:50:07.109920 7 log.go:172] (0xc0024f26e0) (0xc001575d60) Stream added, broadcasting: 1
I0517 00:50:07.112025 7 log.go:172] (0xc0024f26e0) Reply frame received for 1
I0517 00:50:07.112082 7 log.go:172] (0xc0024f26e0) (0xc001be05a0) Create stream
I0517 00:50:07.112108 7 log.go:172] (0xc0024f26e0) (0xc001be05a0) Stream added, broadcasting: 3
I0517 00:50:07.113326 7 log.go:172] (0xc0024f26e0) Reply frame received for 3
I0517 00:50:07.113353 7 log.go:172] (0xc0024f26e0) (0xc001be0640) Create stream
I0517 00:50:07.113366 7 log.go:172] (0xc0024f26e0) (0xc001be0640) Stream added, broadcasting: 5
I0517 00:50:07.114371 7 log.go:172] (0xc0024f26e0) Reply frame received for 5
I0517 00:50:07.197577 7 log.go:172] (0xc0024f26e0) Data frame received for 3
I0517 00:50:07.197608 7 log.go:172] (0xc001be05a0) (3) Data frame handling
I0517 00:50:07.197621 7 log.go:172] (0xc001be05a0) (3) Data frame sent
I0517 00:50:07.197634 7 log.go:172] (0xc0024f26e0) Data frame received for 3
I0517 00:50:07.197649 7 log.go:172] (0xc001be05a0) (3) Data frame handling
I0517 00:50:07.197731 7 log.go:172] (0xc0024f26e0) Data frame received for 5
I0517 00:50:07.197777 7 log.go:172] (0xc001be0640) (5) Data frame handling
I0517 00:50:07.199585 7 log.go:172] (0xc0024f26e0) Data frame received for 1
I0517 00:50:07.199628 7 log.go:172] (0xc001575d60) (1) Data frame handling
I0517 00:50:07.199686 7 log.go:172] (0xc001575d60) (1) Data frame sent
I0517 00:50:07.199710 7 log.go:172] (0xc0024f26e0) (0xc001575d60) Stream removed, broadcasting: 1
I0517 00:50:07.199738 7 log.go:172] (0xc0024f26e0) Go away received
I0517 00:50:07.199880 7 log.go:172] (0xc0024f26e0) (0xc001575d60) Stream removed, broadcasting: 1
I0517 00:50:07.199917 7 log.go:172] (0xc0024f26e0) (0xc001be05a0) Stream removed, broadcasting: 3
I0517 00:50:07.199945 7 log.go:172] (0xc0024f26e0) (0xc001be0640) Stream removed, broadcasting: 5
May 17 00:50:07.199: INFO: Found all expected endpoints: [netserver-0]
May 17 00:50:07.204: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-372 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 17 00:50:07.204: INFO: >>> kubeConfig: /root/.kube/config
I0517 00:50:07.234539 7 log.go:172] (0xc0024f2d10) (0xc0012a8780) Create stream
I0517 00:50:07.234570 7 log.go:172] (0xc0024f2d10) (0xc0012a8780) Stream added, broadcasting: 1
I0517 00:50:07.236014 7 log.go:172] (0xc0024f2d10) Reply frame received for 1
I0517 00:50:07.236065 7 log.go:172] (0xc0024f2d10) (0xc0017f2140) Create stream
I0517 00:50:07.236078 7 log.go:172] (0xc0024f2d10) (0xc0017f2140) Stream added, broadcasting: 3
I0517 00:50:07.236886 7 log.go:172] (0xc0024f2d10) Reply frame received for 3
I0517 00:50:07.236918 7 log.go:172] (0xc0024f2d10) (0xc001be06e0) Create stream
I0517 00:50:07.236929 7 log.go:172] (0xc0024f2d10) (0xc001be06e0) Stream added, broadcasting: 5
I0517 00:50:07.238063 7 log.go:172] (0xc0024f2d10) Reply frame received for 5
I0517 00:50:07.318092 7 log.go:172] (0xc0024f2d10) Data frame received for 5
I0517 00:50:07.318145 7 log.go:172] (0xc001be06e0) (5) Data frame handling
I0517 00:50:07.318175 7 log.go:172] (0xc0024f2d10) Data frame received for 3
I0517 00:50:07.318193 7 log.go:172] (0xc0017f2140) (3) Data frame handling
I0517 00:50:07.318203 7 log.go:172] (0xc0017f2140) (3) Data frame sent
I0517 00:50:07.318220 7 log.go:172] (0xc0024f2d10) Data frame received for 3
I0517 00:50:07.318232 7 log.go:172] (0xc0017f2140) (3) Data frame handling
I0517 00:50:07.319718 7 log.go:172] (0xc0024f2d10) Data frame received for 1
I0517 00:50:07.319742 7 log.go:172] (0xc0012a8780) (1) Data frame handling
I0517 00:50:07.319753 7 log.go:172] (0xc0012a8780) (1) Data frame sent
I0517 00:50:07.319771 7 log.go:172] (0xc0024f2d10) (0xc0012a8780) Stream removed, broadcasting: 1
I0517 00:50:07.319794 7 log.go:172] (0xc0024f2d10) Go away received
I0517 00:50:07.319927 7 log.go:172] (0xc0024f2d10) (0xc0012a8780) Stream removed, broadcasting: 1
I0517 00:50:07.319958 7 log.go:172] (0xc0024f2d10) (0xc0017f2140) Stream removed, broadcasting: 3
I0517 00:50:07.319980 7 log.go:172] (0xc0024f2d10) (0xc001be06e0) Stream removed, broadcasting: 5
May 17 00:50:07.320: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:50:07.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-372" for this suite.
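The connectivity probe above shells out to `curl -g -q -s --max-time 15 --connect-timeout 1 http://<pod-ip>:8080/hostName` inside the exec pod. The same check can be sketched with the Go standard library, where a `net.Dialer` timeout plays the role of `--connect-timeout` and `http.Client.Timeout` the role of `--max-time`; the local test server here is a stand-in for the netserver pod, not the real endpoint:

```go
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

func main() {
	// Stand-in for the netserver pod's /hostName endpoint.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "netserver-0")
	}))
	defer srv.Close()

	client := &http.Client{
		Timeout: 15 * time.Second, // like curl --max-time 15
		Transport: &http.Transport{
			DialContext: (&net.Dialer{
				Timeout: 1 * time.Second, // like curl --connect-timeout 1
			}).DialContext,
		},
	}
	resp, err := client.Get(srv.URL + "/hostName")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The trailing `| grep -v '^\s*$'` in the log just drops blank lines;
	// trimming whitespace achieves the same for a single-line response.
	fmt.Println(strings.TrimSpace(string(body)))
}
```

The test then compares the returned hostname against the expected endpoint list (`[netserver-0]`, `[netserver-1]` in the log).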
• [SLOW TEST:28.497 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3382,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:50:07.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:50:07.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7628" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":203,"skipped":3390,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:50:07.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-803a1a71-0682-4dcc-b980-9ad566f5e6f5
STEP: Creating a pod to test consume configMaps
May 17 00:50:07.542: INFO: Waiting up to 5m0s for pod "pod-configmaps-0816b248-8065-4631-a5d4-552ca9e03f21" in namespace "configmap-583" to be "Succeeded or Failed"
May 17 00:50:07.564: INFO: Pod "pod-configmaps-0816b248-8065-4631-a5d4-552ca9e03f21": Phase="Pending", Reason="", readiness=false. Elapsed: 21.311381ms
May 17 00:50:09.568: INFO: Pod "pod-configmaps-0816b248-8065-4631-a5d4-552ca9e03f21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025446842s
May 17 00:50:11.572: INFO: Pod "pod-configmaps-0816b248-8065-4631-a5d4-552ca9e03f21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029730347s
STEP: Saw pod success
May 17 00:50:11.572: INFO: Pod "pod-configmaps-0816b248-8065-4631-a5d4-552ca9e03f21" satisfied condition "Succeeded or Failed"
May 17 00:50:11.576: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0816b248-8065-4631-a5d4-552ca9e03f21 container configmap-volume-test:
STEP: delete the pod
May 17 00:50:11.627: INFO: Waiting for pod pod-configmaps-0816b248-8065-4631-a5d4-552ca9e03f21 to disappear
May 17 00:50:11.633: INFO: Pod pod-configmaps-0816b248-8065-4631-a5d4-552ca9e03f21 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:50:11.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-583" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3400,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:50:11.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:50:28.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7775" for this suite.
• [SLOW TEST:17.171 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":205,"skipped":3401,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:50:28.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 17 00:50:28.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:50:32.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1640" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":206,"skipped":3434,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:50:32.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:50:49.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3898" for this suite.
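The Job test above runs tasks that sometimes fail and relies on local (in-pod) restarts until the Job's completion count is reached. A toy stdlib Go sketch of that retry-until-completions idea, with a simulated flaky task standing in for a container under a restart policy:

```go
package main

import "fmt"

func main() {
	const completions = 3

	// Simulated flaky task: fails on its first attempt and succeeds on
	// the retry, standing in for a container restarted after failure.
	attempts := 0
	runTask := func() bool {
		attempts++
		return attempts%2 == 0 // fail, then succeed, alternating
	}

	completed := 0
	totalAttempts := 0
	// Keep (re)running tasks until the desired completion count is met,
	// which is what "Ensuring job reaches completions" waits for.
	for completed < completions {
		totalAttempts++
		if runTask() {
			completed++
		}
	}
	fmt.Printf("completions=%d attempts=%d\n", completed, totalAttempts)
}
```

The real controller distributes attempts across pods and honors backoff limits; this sketch only shows why intermittent failure still converges to the completion count.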
• [SLOW TEST:16.068 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":207,"skipped":3454,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:50:49.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-jk5x
STEP: Creating a pod to test atomic-volume-subpath
May 17 00:50:49.187: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jk5x" in namespace "subpath-2488" to be "Succeeded or Failed"
May 17 00:50:49.251: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Pending", Reason="", readiness=false. Elapsed: 63.873726ms
May 17 00:50:51.255: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068453244s
May 17 00:50:53.260: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 4.072980094s
May 17 00:50:55.264: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 6.076698611s
May 17 00:50:57.267: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 8.080180471s
May 17 00:50:59.271: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 10.084359967s
May 17 00:51:01.276: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 12.088708835s
May 17 00:51:03.281: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 14.093592348s
May 17 00:51:05.286: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 16.098824849s
May 17 00:51:07.290: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 18.103541219s
May 17 00:51:09.295: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 20.108194328s
May 17 00:51:11.300: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 22.113073004s
May 17 00:51:13.304: INFO: Pod "pod-subpath-test-secret-jk5x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.117070344s
STEP: Saw pod success
May 17 00:51:13.304: INFO: Pod "pod-subpath-test-secret-jk5x" satisfied condition "Succeeded or Failed"
May 17 00:51:13.307: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-jk5x container test-container-subpath-secret-jk5x:
STEP: delete the pod
May 17 00:51:13.398: INFO: Waiting for pod pod-subpath-test-secret-jk5x to disappear
May 17 00:51:13.406: INFO: Pod pod-subpath-test-secret-jk5x no longer exists
STEP: Deleting pod pod-subpath-test-secret-jk5x
May 17 00:51:13.406: INFO: Deleting pod "pod-subpath-test-secret-jk5x" in namespace "subpath-2488"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:51:13.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2488" for this suite.
• [SLOW TEST:24.367 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":208,"skipped":3468,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:51:13.415: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-b49d9a28-bbbc-4e5e-ac5a-ff67dbbfcb19
STEP: Creating a pod to test consume secrets
May 17 00:51:13.486: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878" in namespace "projected-6645" to be "Succeeded or Failed"
May 17 00:51:13.538: INFO: Pod "pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878": Phase="Pending", Reason="", readiness=false. Elapsed: 52.545513ms
May 17 00:51:15.542: INFO: Pod "pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05668383s
May 17 00:51:17.547: INFO: Pod "pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878": Phase="Running", Reason="", readiness=true. Elapsed: 4.06107129s
May 17 00:51:19.551: INFO: Pod "pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06581951s
STEP: Saw pod success
May 17 00:51:19.551: INFO: Pod "pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878" satisfied condition "Succeeded or Failed"
May 17 00:51:19.555: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878 container secret-volume-test:
STEP: delete the pod
May 17 00:51:19.583: INFO: Waiting for pod pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878 to disappear
May 17 00:51:19.592: INFO: Pod pod-projected-secrets-31a949f8-5438-40dd-9954-d3f864de6878 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:51:19.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6645" for this suite.
• [SLOW TEST:6.185 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":209,"skipped":3474,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:51:19.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-f618b344-03a5-49e3-b593-41fd3eb85cd3
STEP: Creating a pod to test consume configMaps
May 17 00:51:19.715: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b8b3d4d-3be8-4162-aa7e-46257149db3a" in namespace "configmap-2746" to be "Succeeded or Failed"
May 17 00:51:19.724: INFO: Pod "pod-configmaps-3b8b3d4d-3be8-4162-aa7e-46257149db3a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.021724ms
May 17 00:51:21.891: INFO: Pod "pod-configmaps-3b8b3d4d-3be8-4162-aa7e-46257149db3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176410488s
May 17 00:51:23.896: INFO: Pod "pod-configmaps-3b8b3d4d-3be8-4162-aa7e-46257149db3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180996177s
STEP: Saw pod success
May 17 00:51:23.896: INFO: Pod "pod-configmaps-3b8b3d4d-3be8-4162-aa7e-46257149db3a" satisfied condition "Succeeded or Failed"
May 17 00:51:23.899: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3b8b3d4d-3be8-4162-aa7e-46257149db3a container configmap-volume-test:
STEP: delete the pod
May 17 00:51:23.934: INFO: Waiting for pod pod-configmaps-3b8b3d4d-3be8-4162-aa7e-46257149db3a to disappear
May 17 00:51:23.946: INFO: Pod pod-configmaps-3b8b3d4d-3be8-4162-aa7e-46257149db3a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:51:23.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2746" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:51:23.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
May 17 00:51:24.062: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:51:32.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4999" for this suite.
• [SLOW TEST:8.324 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":211,"skipped":3542,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:51:32.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:51:32.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5375" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":212,"skipped":3548,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:51:32.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:51:32.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5002" for this suite.
STEP: Destroying namespace "nspatchtest-31d29899-ccad-4882-a975-f2b92a42a3d2-8933" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":213,"skipped":3556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:51:32.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 17 00:51:32.746: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4884 /api/v1/namespaces/watch-4884/configmaps/e2e-watch-test-watch-closed 7beeced7-9361-46cf-82ee-180ed8cb18a9 5296922 0 2020-05-17 00:51:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-17 00:51:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 17 00:51:32.746: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4884 /api/v1/namespaces/watch-4884/configmaps/e2e-watch-test-watch-closed 7beeced7-9361-46cf-82ee-180ed8cb18a9 5296923 0 2020-05-17 00:51:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-17 00:51:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 17 00:51:32.756: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4884 /api/v1/namespaces/watch-4884/configmaps/e2e-watch-test-watch-closed 7beeced7-9361-46cf-82ee-180ed8cb18a9 5296924 0 2020-05-17 00:51:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-17 00:51:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 17 00:51:32.756: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4884 /api/v1/namespaces/watch-4884/configmaps/e2e-watch-test-watch-closed 7beeced7-9361-46cf-82ee-180ed8cb18a9 5296925 0 2020-05-17 00:51:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-17 00:51:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:51:32.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4884" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":214,"skipped":3584,"failed":0}
SSSS
------------------------------
[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:51:32.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-665
STEP: creating service affinity-clusterip in namespace services-665
STEP: creating replication controller affinity-clusterip in namespace services-665
I0517 00:51:32.936386 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-665, replica count: 3
I0517 00:51:35.987280 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0517 00:51:38.987509 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 17 00:51:38.995: INFO: Creating new exec pod
May 17 00:51:44.012: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-665 execpod-affinitydcsqv -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 17 00:51:44.272: INFO: stderr: "I0517 00:51:44.141877 4099 log.go:172] (0xc000b26d10) (0xc0003faf00) Create stream\nI0517 00:51:44.141944 4099 log.go:172] (0xc000b26d10) (0xc0003faf00) Stream added, broadcasting: 1\nI0517 00:51:44.144886 4099 log.go:172] (0xc000b26d10) Reply frame received for 1\nI0517 00:51:44.144921 4099 log.go:172] (0xc000b26d10) (0xc00059e460) Create stream\nI0517 00:51:44.144934 4099 log.go:172] (0xc000b26d10) (0xc00059e460) Stream added, broadcasting: 3\nI0517 00:51:44.146281 4099 log.go:172] (0xc000b26d10) Reply frame received for 3\nI0517 00:51:44.146332 4099 log.go:172] (0xc000b26d10) (0xc00051a320) Create stream\nI0517 00:51:44.146358 4099 log.go:172] (0xc000b26d10) (0xc00051a320) Stream added, broadcasting: 5\nI0517 00:51:44.147270 4099 log.go:172] (0xc000b26d10) Reply frame received for 5\nI0517 00:51:44.244989 4099 log.go:172] (0xc000b26d10) Data frame received for 5\nI0517 00:51:44.245038 4099 log.go:172] (0xc00051a320) (5) Data frame handling\nI0517 00:51:44.245067 4099 log.go:172] (0xc00051a320) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0517 00:51:44.263635 4099 log.go:172] (0xc000b26d10) Data frame received for 5\nI0517 00:51:44.263659 4099 log.go:172] (0xc00051a320) (5) Data frame handling\nI0517 00:51:44.263686 4099 log.go:172] (0xc00051a320) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0517 00:51:44.264150 4099 log.go:172] (0xc000b26d10) Data frame received for 3\nI0517 00:51:44.264172 4099 log.go:172] (0xc00059e460) (3) Data frame handling\nI0517 00:51:44.264214 4099 log.go:172] (0xc000b26d10) Data frame received for 5\nI0517 00:51:44.264233 4099 log.go:172] (0xc00051a320) (5) Data frame handling\nI0517 00:51:44.266920 4099 log.go:172] (0xc000b26d10) Data frame received for 1\nI0517 
00:51:44.266936 4099 log.go:172] (0xc0003faf00) (1) Data frame handling\nI0517 00:51:44.266949 4099 log.go:172] (0xc0003faf00) (1) Data frame sent\nI0517 00:51:44.267053 4099 log.go:172] (0xc000b26d10) (0xc0003faf00) Stream removed, broadcasting: 1\nI0517 00:51:44.267240 4099 log.go:172] (0xc000b26d10) Go away received\nI0517 00:51:44.267425 4099 log.go:172] (0xc000b26d10) (0xc0003faf00) Stream removed, broadcasting: 1\nI0517 00:51:44.267439 4099 log.go:172] (0xc000b26d10) (0xc00059e460) Stream removed, broadcasting: 3\nI0517 00:51:44.267450 4099 log.go:172] (0xc000b26d10) (0xc00051a320) Stream removed, broadcasting: 5\n" May 17 00:51:44.273: INFO: stdout: "" May 17 00:51:44.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-665 execpod-affinitydcsqv -- /bin/sh -x -c nc -zv -t -w 2 10.101.252.152 80' May 17 00:51:44.480: INFO: stderr: "I0517 00:51:44.407860 4119 log.go:172] (0xc000a9f1e0) (0xc00083dcc0) Create stream\nI0517 00:51:44.407922 4119 log.go:172] (0xc000a9f1e0) (0xc00083dcc0) Stream added, broadcasting: 1\nI0517 00:51:44.411887 4119 log.go:172] (0xc000a9f1e0) Reply frame received for 1\nI0517 00:51:44.412015 4119 log.go:172] (0xc000a9f1e0) (0xc0008446e0) Create stream\nI0517 00:51:44.412047 4119 log.go:172] (0xc000a9f1e0) (0xc0008446e0) Stream added, broadcasting: 3\nI0517 00:51:44.414025 4119 log.go:172] (0xc000a9f1e0) Reply frame received for 3\nI0517 00:51:44.414076 4119 log.go:172] (0xc000a9f1e0) (0xc00084c000) Create stream\nI0517 00:51:44.414108 4119 log.go:172] (0xc000a9f1e0) (0xc00084c000) Stream added, broadcasting: 5\nI0517 00:51:44.415247 4119 log.go:172] (0xc000a9f1e0) Reply frame received for 5\nI0517 00:51:44.474729 4119 log.go:172] (0xc000a9f1e0) Data frame received for 3\nI0517 00:51:44.474773 4119 log.go:172] (0xc0008446e0) (3) Data frame handling\nI0517 00:51:44.474796 4119 log.go:172] (0xc000a9f1e0) Data frame received for 5\nI0517 00:51:44.474806 4119 
log.go:172] (0xc00084c000) (5) Data frame handling\nI0517 00:51:44.474821 4119 log.go:172] (0xc00084c000) (5) Data frame sent\nI0517 00:51:44.474835 4119 log.go:172] (0xc000a9f1e0) Data frame received for 5\nI0517 00:51:44.474843 4119 log.go:172] (0xc00084c000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.252.152 80\nConnection to 10.101.252.152 80 port [tcp/http] succeeded!\nI0517 00:51:44.476024 4119 log.go:172] (0xc000a9f1e0) Data frame received for 1\nI0517 00:51:44.476040 4119 log.go:172] (0xc00083dcc0) (1) Data frame handling\nI0517 00:51:44.476053 4119 log.go:172] (0xc00083dcc0) (1) Data frame sent\nI0517 00:51:44.476061 4119 log.go:172] (0xc000a9f1e0) (0xc00083dcc0) Stream removed, broadcasting: 1\nI0517 00:51:44.476153 4119 log.go:172] (0xc000a9f1e0) Go away received\nI0517 00:51:44.476289 4119 log.go:172] (0xc000a9f1e0) (0xc00083dcc0) Stream removed, broadcasting: 1\nI0517 00:51:44.476300 4119 log.go:172] (0xc000a9f1e0) (0xc0008446e0) Stream removed, broadcasting: 3\nI0517 00:51:44.476307 4119 log.go:172] (0xc000a9f1e0) (0xc00084c000) Stream removed, broadcasting: 5\n" May 17 00:51:44.480: INFO: stdout: "" May 17 00:51:44.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-665 execpod-affinitydcsqv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.101.252.152:80/ ; done' May 17 00:51:44.785: INFO: stderr: "I0517 00:51:44.606219 4139 log.go:172] (0xc000be0fd0) (0xc0004c3180) Create stream\nI0517 00:51:44.606426 4139 log.go:172] (0xc000be0fd0) (0xc0004c3180) Stream added, broadcasting: 1\nI0517 00:51:44.609775 4139 log.go:172] (0xc000be0fd0) Reply frame received for 1\nI0517 00:51:44.609806 4139 log.go:172] (0xc000be0fd0) (0xc0003c0280) Create stream\nI0517 00:51:44.609814 4139 log.go:172] (0xc000be0fd0) (0xc0003c0280) Stream added, broadcasting: 3\nI0517 00:51:44.610927 4139 log.go:172] (0xc000be0fd0) Reply frame received for 
3\nI0517 00:51:44.610963 4139 log.go:172] (0xc000be0fd0) (0xc0004c3220) Create stream\nI0517 00:51:44.610983 4139 log.go:172] (0xc000be0fd0) (0xc0004c3220) Stream added, broadcasting: 5\nI0517 00:51:44.612121 4139 log.go:172] (0xc000be0fd0) Reply frame received for 5\nI0517 00:51:44.669604 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.669661 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.669679 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.669705 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.669717 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.669735 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.677063 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.677088 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.677106 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.678357 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.678393 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.678407 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\nI0517 00:51:44.678418 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/I0517 00:51:44.678433 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.678451 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n\nI0517 00:51:44.678492 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.678518 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.678538 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.686382 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.686415 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 
00:51:44.686441 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.686724 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.686747 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.686758 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.686774 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.686791 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.686813 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.691090 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.691113 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.691134 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.691685 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.691711 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.691726 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.691749 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.691766 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.691776 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.695224 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.695241 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.695260 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.695690 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.695737 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.695776 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.695802 4139 log.go:172] (0xc000be0fd0) Data frame received for 
3\nI0517 00:51:44.695823 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.695836 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.702703 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.702724 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.702745 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.703488 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.703510 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.703539 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.703566 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.703577 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.703594 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.709805 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.709828 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.709847 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.710402 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.710433 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.710452 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.710478 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.710494 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.710512 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.715469 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.715484 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.715492 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.715879 4139 log.go:172] (0xc000be0fd0) Data 
frame received for 3\nI0517 00:51:44.715915 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.715929 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.715947 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.715957 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.715969 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.720030 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.720050 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.720065 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.720574 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.720602 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.720615 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.720633 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.720642 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.720657 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.724725 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.724746 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.724761 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.725305 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.725492 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.725510 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\nI0517 00:51:44.725521 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.725531 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.725552 4139 log.go:172] 
(0xc0004c3220) (5) Data frame sent\nI0517 00:51:44.725563 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.725571 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.725581 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.730013 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.730035 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.730064 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.730526 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.730547 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.730559 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.730580 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.730598 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.730617 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -qI0517 00:51:44.730633 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.730647 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.730665 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.734899 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.734915 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.734927 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.735707 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.735737 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.735760 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.735796 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.735813 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.735835 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.742733 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.742760 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.742788 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.743515 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.743551 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.743582 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.743632 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.743658 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.743693 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.747621 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.747649 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.747668 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.748127 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.748144 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.748153 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.748170 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.748189 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.748212 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.752789 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.752811 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.752832 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.753862 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.753908 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 
00:51:44.753924 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.753970 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.753989 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.754014 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.758624 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.758653 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.758675 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.759924 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.759944 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.759957 4139 log.go:172] (0xc0004c3220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.252.152:80/\nI0517 00:51:44.760017 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.760028 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.760044 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.764849 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.764882 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.764920 4139 log.go:172] (0xc0003c0280) (3) Data frame sent\nI0517 00:51:44.765518 4139 log.go:172] (0xc000be0fd0) Data frame received for 3\nI0517 00:51:44.765548 4139 log.go:172] (0xc0003c0280) (3) Data frame handling\nI0517 00:51:44.765743 4139 log.go:172] (0xc000be0fd0) Data frame received for 5\nI0517 00:51:44.765766 4139 log.go:172] (0xc0004c3220) (5) Data frame handling\nI0517 00:51:44.777689 4139 log.go:172] (0xc000be0fd0) Data frame received for 1\nI0517 00:51:44.777734 4139 log.go:172] (0xc0004c3180) (1) Data frame handling\nI0517 00:51:44.777763 4139 log.go:172] (0xc0004c3180) (1) Data frame sent\nI0517 00:51:44.777805 4139 log.go:172] (0xc000be0fd0) 
(0xc0004c3180) Stream removed, broadcasting: 1\nI0517 00:51:44.777830 4139 log.go:172] (0xc000be0fd0) Go away received\nI0517 00:51:44.778345 4139 log.go:172] (0xc000be0fd0) (0xc0004c3180) Stream removed, broadcasting: 1\nI0517 00:51:44.778370 4139 log.go:172] (0xc000be0fd0) (0xc0003c0280) Stream removed, broadcasting: 3\nI0517 00:51:44.778382 4139 log.go:172] (0xc000be0fd0) (0xc0004c3220) Stream removed, broadcasting: 5\n" May 17 00:51:44.786: INFO: stdout: "\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt\naffinity-clusterip-kdjjt" May 17 00:51:44.786: INFO: Received response from host: May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 
00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Received response from host: affinity-clusterip-kdjjt May 17 00:51:44.786: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-665, will wait for the garbage collector to delete the pods May 17 00:51:44.907: INFO: Deleting ReplicationController affinity-clusterip took: 6.630977ms May 17 00:51:45.308: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.268889ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:51:55.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-665" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:22.605 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":215,"skipped":3588,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:51:55.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 17 00:51:55.439: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:52:11.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6427" for this suite. 
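The version-rename test above boils down to a membership check on the CRD's served versions: the renamed version must appear, the old name must be gone, and the untouched version must remain. A minimal offline sketch — the version names here are illustrative, not read from this run; a real check would fetch them from the CRD object (e.g. via `kubectl get crd ... -o jsonpath`):

```shell
# Illustrative served-version lists; not taken from this test's CRD.
served_before="v2 v4"
served_after="v3 v4"    # v2 renamed to v3; v4 untouched

# Whitespace-delimited membership test (POSIX sh).
contains() { case " $1 " in *" $2 "*) return 0;; *) return 1;; esac; }

contains "$served_after" v3 && echo "new version v3 is served"
contains "$served_after" v2 || echo "old version v2 is removed"
contains "$served_after" v4 && echo "other version v4 is unchanged"
```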
• [SLOW TEST:16.034 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":216,"skipped":3596,"failed":0} S ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:52:11.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:52:11.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-8926" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":217,"skipped":3597,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:52:11.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 00:52:12.586: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 00:52:14.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273532, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273532, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273532, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273532, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 00:52:17.731: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:52:29.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4618" for this suite. STEP: Destroying namespace "webhook-4618-markers" for this suite. 
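The four timeout scenarios exercised above follow one rule: a call that exceeds `timeoutSeconds` rejects the request only when the webhook's `failurePolicy` is `Fail`; an empty timeout defaults to 10s in `admissionregistration.k8s.io/v1`. A sketch of that decision matrix, using the 1s/5s/30s values from the log (the decision function itself is an illustration, not apiserver code):

```shell
# webhook_call_ok TIMEOUT LATENCY POLICY -> 0 if the request is admitted
webhook_call_ok() {
  timeout=${1:-10}   # empty timeoutSeconds defaults to 10s in v1
  latency=$2
  policy=$3          # Fail | Ignore
  if [ "$latency" -gt "$timeout" ] && [ "$policy" = "Fail" ]; then
    return 1         # deadline exceeded and failure policy rejects
  fi
  return 0
}

webhook_call_ok 1  5 Fail   || echo "1s timeout, 5s latency, Fail: rejected"
webhook_call_ok 1  5 Ignore && echo "1s timeout, 5s latency, Ignore: admitted"
webhook_call_ok 30 5 Fail   && echo "30s timeout, 5s latency: admitted"
webhook_call_ok "" 5 Fail   && echo "defaulted 10s timeout, 5s latency: admitted"
```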
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.552 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":218,"skipped":3604,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:52:30.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:52:30.152: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 17 00:52:32.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5446 create -f -' May 17 00:52:35.812: INFO: stderr: "" May 17 
00:52:35.812: INFO: stdout: "e2e-test-crd-publish-openapi-6889-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 17 00:52:35.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5446 delete e2e-test-crd-publish-openapi-6889-crds test-cr' May 17 00:52:35.917: INFO: stderr: "" May 17 00:52:35.917: INFO: stdout: "e2e-test-crd-publish-openapi-6889-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 17 00:52:35.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5446 apply -f -' May 17 00:52:36.165: INFO: stderr: "" May 17 00:52:36.165: INFO: stdout: "e2e-test-crd-publish-openapi-6889-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 17 00:52:36.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5446 delete e2e-test-crd-publish-openapi-6889-crds test-cr' May 17 00:52:36.270: INFO: stderr: "" May 17 00:52:36.270: INFO: stdout: "e2e-test-crd-publish-openapi-6889-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 17 00:52:36.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6889-crds' May 17 00:52:36.509: INFO: stderr: "" May 17 00:52:36.509: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6889-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:52:38.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5446" for this suite. 
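Client-side validation accepts arbitrary unknown properties above because the CRD's schema marks the nested field with `x-kubernetes-preserve-unknown-fields: true`. A tiny sketch of that check against an illustrative schema fragment (not the schema this test generates):

```shell
# Illustrative CRD schema fragment with pruning disabled under spec.
schema='
spec:
  type: object
  x-kubernetes-preserve-unknown-fields: true'

if printf '%s\n' "$schema" | grep -q 'x-kubernetes-preserve-unknown-fields: true'; then
  echo "unknown fields under spec are preserved, so any property is allowed"
fi
```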
• [SLOW TEST:8.328 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":219,"skipped":3623,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:52:38.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 17 00:52:38.527: INFO: Waiting up to 5m0s for pod "pod-2ef605c7-a0d0-44c9-bcff-d7879177504c" in namespace "emptydir-2762" to be "Succeeded or Failed" May 17 00:52:38.542: INFO: Pod "pod-2ef605c7-a0d0-44c9-bcff-d7879177504c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.559412ms May 17 00:52:40.545: INFO: Pod "pod-2ef605c7-a0d0-44c9-bcff-d7879177504c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017922287s May 17 00:52:42.550: INFO: Pod "pod-2ef605c7-a0d0-44c9-bcff-d7879177504c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022551674s STEP: Saw pod success May 17 00:52:42.550: INFO: Pod "pod-2ef605c7-a0d0-44c9-bcff-d7879177504c" satisfied condition "Succeeded or Failed" May 17 00:52:42.554: INFO: Trying to get logs from node latest-worker pod pod-2ef605c7-a0d0-44c9-bcff-d7879177504c container test-container: STEP: delete the pod May 17 00:52:42.594: INFO: Waiting for pod pod-2ef605c7-a0d0-44c9-bcff-d7879177504c to disappear May 17 00:52:42.602: INFO: Pod pod-2ef605c7-a0d0-44c9-bcff-d7879177504c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:52:42.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2762" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":220,"skipped":3632,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:52:42.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 00:52:42.705: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1" in namespace "downward-api-4136" to be "Succeeded or Failed" May 17 00:52:42.766: INFO: Pod "downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1": Phase="Pending", Reason="", readiness=false. Elapsed: 60.690223ms May 17 00:52:44.771: INFO: Pod "downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065416742s May 17 00:52:46.775: INFO: Pod "downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1": Phase="Running", Reason="", readiness=true. Elapsed: 4.069475086s May 17 00:52:48.780: INFO: Pod "downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074214433s STEP: Saw pod success May 17 00:52:48.780: INFO: Pod "downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1" satisfied condition "Succeeded or Failed" May 17 00:52:48.783: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1 container client-container: STEP: delete the pod May 17 00:52:48.830: INFO: Waiting for pod downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1 to disappear May 17 00:52:48.868: INFO: Pod downwardapi-volume-efba6701-bb06-455c-b3d9-1f6af049ecb1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:52:48.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4136" for this suite. 
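The downward API volume in the test above exposes the container's cpu request as the request divided by the `resourceFieldRef` divisor. A sketch of that arithmetic — the `250m` request and `1m` divisor are illustrative values, not read from this test's pod spec:

```shell
# Convert a Kubernetes cpu quantity to millicores: "250m" -> 250, "2" -> 2000.
cpu_request_millis() {
  case $1 in
    *m) echo "${1%m}" ;;
    *)  echo "$(( $1 * 1000 ))" ;;
  esac
}

req=$(cpu_request_millis 250m)   # requests.cpu: 250m (illustrative)
div=$(cpu_request_millis 1m)     # divisor: 1m (illustrative)
echo "$(( req / div ))"          # value the container reads from the volume file
```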
• [SLOW TEST:6.246 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3648,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:52:48.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:52:48.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-977" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":222,"skipped":3659,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:52:48.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:53:14.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5435" for this suite. 
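The quota lifecycle verified above reduces to simple accounting: a pod is admitted only while `used + requested` stays within the hard limit, and deleting a pod releases its usage. A minimal sketch with an illustrative hard limit of two pods (matching the "allows only two pods" quota used earlier in this run):

```shell
hard_pods=2
used_pods=0

# Admit a pod only if it fits under the hard limit; track usage.
admit_pod() {
  if [ $(( used_pods + 1 )) -le "$hard_pods" ]; then
    used_pods=$(( used_pods + 1 ))
    return 0
  fi
  return 1   # quota exceeded: pod creation is rejected
}

admit_pod && echo "pod 1 admitted (used=$used_pods)"
admit_pod && echo "pod 2 admitted (used=$used_pods)"
admit_pod || echo "pod 3 rejected: would exceed hard limit of $hard_pods"
used_pods=$(( used_pods - 1 ))   # deleting a pod releases its usage
```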
• [SLOW TEST:25.245 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":223,"skipped":3662,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:53:14.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 17 00:53:14.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version'
May 17 00:53:14.384: INFO: stderr: ""
May 17 00:53:14.384: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:53:14.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-619" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":224,"skipped":3714,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:53:14.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:53:14.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-855" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3716,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:53:14.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:53:20.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3537" for this suite.
• [SLOW TEST:5.606 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":226,"skipped":3717,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:53:20.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-c64e09e4-48a0-43ee-bf6b-382ae43309e0
STEP: Creating a pod to test consume configMaps
May 17 00:53:20.318: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0b60d22-1532-41ab-870a-7e9ebef63d95" in namespace "configmap-2606" to be "Succeeded or Failed"
May 17 00:53:20.322: INFO: Pod "pod-configmaps-e0b60d22-1532-41ab-870a-7e9ebef63d95": Phase="Pending", Reason="", readiness=false. Elapsed: 3.609086ms
May 17 00:53:22.326: INFO: Pod "pod-configmaps-e0b60d22-1532-41ab-870a-7e9ebef63d95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007860845s
May 17 00:53:24.330: INFO: Pod "pod-configmaps-e0b60d22-1532-41ab-870a-7e9ebef63d95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012360667s
STEP: Saw pod success
May 17 00:53:24.330: INFO: Pod "pod-configmaps-e0b60d22-1532-41ab-870a-7e9ebef63d95" satisfied condition "Succeeded or Failed"
May 17 00:53:24.334: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e0b60d22-1532-41ab-870a-7e9ebef63d95 container configmap-volume-test:
STEP: delete the pod
May 17 00:53:24.359: INFO: Waiting for pod pod-configmaps-e0b60d22-1532-41ab-870a-7e9ebef63d95 to disappear
May 17 00:53:24.432: INFO: Pod pod-configmaps-e0b60d22-1532-41ab-870a-7e9ebef63d95 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:53:24.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2606" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":227,"skipped":3738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:53:24.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-5e7c048f-5b3d-4804-bf11-d9fcc6f7ef51
STEP: Creating secret with name secret-projected-all-test-volume-b2e2e84d-f6a2-4d0d-9016-2c1a00e3cbf4
STEP: Creating a pod to test Check all projections for projected volume plugin
May 17 00:53:24.544: INFO: Waiting up to 5m0s for pod "projected-volume-2b40633b-f280-4d0b-81db-9cc441e11fb0" in namespace "projected-3861" to be "Succeeded or Failed"
May 17 00:53:24.578: INFO: Pod "projected-volume-2b40633b-f280-4d0b-81db-9cc441e11fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.732501ms
May 17 00:53:26.584: INFO: Pod "projected-volume-2b40633b-f280-4d0b-81db-9cc441e11fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039836601s
May 17 00:53:28.588: INFO: Pod "projected-volume-2b40633b-f280-4d0b-81db-9cc441e11fb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0440267s
STEP: Saw pod success
May 17 00:53:28.588: INFO: Pod "projected-volume-2b40633b-f280-4d0b-81db-9cc441e11fb0" satisfied condition "Succeeded or Failed"
May 17 00:53:28.591: INFO: Trying to get logs from node latest-worker2 pod projected-volume-2b40633b-f280-4d0b-81db-9cc441e11fb0 container projected-all-volume-test:
STEP: delete the pod
May 17 00:53:28.649: INFO: Waiting for pod projected-volume-2b40633b-f280-4d0b-81db-9cc441e11fb0 to disappear
May 17 00:53:28.675: INFO: Pod projected-volume-2b40633b-f280-4d0b-81db-9cc441e11fb0 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:53:28.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3861" for this suite.
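The "Projected combined" test above mounts a ConfigMap, a Secret, and downward API fields through a single projected volume. A hedged sketch of such a pod spec (field names follow the core/v1 API; the object names and keys are illustrative, not the generated ones from this run):

```yaml
# Sketch only, assuming a ConfigMap "my-cm" and Secret "my-secret" with key "data".
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:                       # one volume, three sources
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: my-cm                # illustrative
          items:
          - key: data
            path: cm
      - secret:
          name: my-secret            # illustrative
          items:
          - key: data
            path: secret
```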
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3767,"failed":0}
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:53:28.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:54:28.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3467" for this suite.
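The probe test above relies on the distinction that a failing readiness probe only keeps a pod NotReady; unlike a liveness probe, it never restarts the container. A minimal sketch of a pod with an always-failing readiness probe (the spec here is illustrative, not the test's actual Go-built pod):

```yaml
# Sketch only: readiness failures mark the pod NotReady but never restart it.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo        # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: busybox
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["false"]         # always fails -> Ready stays false
      initialDelaySeconds: 5
      periodSeconds: 5
    # no livenessProbe, so restart count should remain 0
```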
• [SLOW TEST:60.093 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":229,"skipped":3767,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:54:28.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 17 00:54:28.998: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 17 00:54:29.023: INFO: Waiting for terminating namespaces to be deleted...
May 17 00:54:29.026: INFO: Logging pods the apiserver thinks is on node latest-worker before test
May 17 00:54:29.035: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.036: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 17 00:54:29.036: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.036: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 17 00:54:29.036: INFO: test-webserver-20904573-05c6-4827-afbc-41acacb1a059 from container-probe-3467 started at 2020-05-17 00:53:28 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.036: INFO: Container test-webserver ready: false, restart count 0
May 17 00:54:29.036: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.036: INFO: Container kindnet-cni ready: true, restart count 0
May 17 00:54:29.036: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.036: INFO: Container kube-proxy ready: true, restart count 0
May 17 00:54:29.036: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
May 17 00:54:29.041: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.041: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 17 00:54:29.041: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.041: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 17 00:54:29.041: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.041: INFO: Container kindnet-cni ready: true, restart count 0
May 17 00:54:29.041: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 17 00:54:29.041: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-dde62531-db9f-46b5-b453-c0b39ca939d5 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-dde62531-db9f-46b5-b453-c0b39ca939d5 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-dde62531-db9f-46b5-b453-c0b39ca939d5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:54:37.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3692" for this suite.
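The scheduling steps above label a node, then relaunch the pod with a matching nodeSelector so the scheduler must place it on that node. A sketch of the relaunched pod, reusing the label key/value recorded in this run (the pod name and image are illustrative):

```yaml
# Sketch only: nodeSelector must match labels on exactly one node.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                  # hypothetical name
spec:
  nodeSelector:
    # label key and value "42" taken from the STEP lines in this run
    kubernetes.io/e2e-dde62531-db9f-46b5-b453-c0b39ca939d5: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.2      # assumed image
```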
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.365 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":230,"skipped":3784,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:54:37.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:54:48.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1467" for this suite.
• [SLOW TEST:11.326 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":231,"skipped":3813,"failed":0}
S
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:54:48.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 17 00:54:55.207: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5b08389e-5b63-4b1b-b209-b6ea94c9ea45"
May 17 00:54:55.207: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5b08389e-5b63-4b1b-b209-b6ea94c9ea45" in namespace "pods-6550" to be "terminated due to deadline exceeded"
May 17 00:54:55.228: INFO: Pod "pod-update-activedeadlineseconds-5b08389e-5b63-4b1b-b209-b6ea94c9ea45": Phase="Running", Reason="", readiness=true. Elapsed: 20.503561ms
May 17 00:54:57.232: INFO: Pod "pod-update-activedeadlineseconds-5b08389e-5b63-4b1b-b209-b6ea94c9ea45": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.024660423s
May 17 00:54:57.232: INFO: Pod "pod-update-activedeadlineseconds-5b08389e-5b63-4b1b-b209-b6ea94c9ea45" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:54:57.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6550" for this suite.
• [SLOW TEST:8.665 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3814,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:54:57.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 17 00:54:57.310: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82" in namespace "projected-1788" to be "Succeeded or Failed"
May 17 00:54:57.319: INFO: Pod "downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56127ms
May 17 00:54:59.595: INFO: Pod "downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284389699s
May 17 00:55:01.599: INFO: Pod "downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82": Phase="Running", Reason="", readiness=true. Elapsed: 4.288630051s
May 17 00:55:03.603: INFO: Pod "downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.293176693s
STEP: Saw pod success
May 17 00:55:03.603: INFO: Pod "downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82" satisfied condition "Succeeded or Failed"
May 17 00:55:03.607: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82 container client-container:
STEP: delete the pod
May 17 00:55:03.653: INFO: Waiting for pod downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82 to disappear
May 17 00:55:03.683: INFO: Pod downwardapi-volume-55f8e52a-004e-4065-b66e-c8dd3a2a4a82 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:55:03.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1788" for this suite.
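The projected downwardAPI test above exposes the container's CPU limit to the container through a volume file. A sketch of the shape of such a pod, using `resourceFieldRef` (names are illustrative; the test builds this object in Go):

```yaml
# Sketch only: resourceFieldRef projects a container's resource limit into a file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                     # the value the file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```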
• [SLOW TEST:6.449 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":233,"skipped":3819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:55:03.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 17 00:55:03.758: INFO: >>> kubeConfig: /root/.kube/config
May 17 00:55:06.704: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:55:17.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9695" for this suite.
• [SLOW TEST:13.727 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":234,"skipped":3880,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:55:17.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 17 00:55:17.555: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57" in namespace "downward-api-9660" to be "Succeeded or Failed"
May 17 00:55:17.558: INFO: Pod "downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.468132ms
May 17 00:55:19.562: INFO: Pod "downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006727345s
May 17 00:55:21.567: INFO: Pod "downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57": Phase="Running", Reason="", readiness=true. Elapsed: 4.01152295s
May 17 00:55:23.571: INFO: Pod "downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01594975s
STEP: Saw pod success
May 17 00:55:23.572: INFO: Pod "downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57" satisfied condition "Succeeded or Failed"
May 17 00:55:23.574: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57 container client-container:
STEP: delete the pod
May 17 00:55:23.607: INFO: Waiting for pod downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57 to disappear
May 17 00:55:23.618: INFO: Pod downwardapi-volume-bff8e574-3644-420f-a693-e4b337f86d57 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:55:23.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9660" for this suite.
• [SLOW TEST:6.206 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":235,"skipped":3885,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:55:23.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-7492e579-5370-446e-83c3-d75fd5e42480
STEP: Creating a pod to test consume secrets
May 17 00:55:23.741: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01391ca0-b502-4366-9790-b4396cccb7f0" in namespace "projected-6316" to be "Succeeded or Failed"
May 17 00:55:23.755: INFO: Pod "pod-projected-secrets-01391ca0-b502-4366-9790-b4396cccb7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.92565ms
May 17 00:55:25.759: INFO: Pod "pod-projected-secrets-01391ca0-b502-4366-9790-b4396cccb7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018896261s
May 17 00:55:27.764: INFO: Pod "pod-projected-secrets-01391ca0-b502-4366-9790-b4396cccb7f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023177737s
STEP: Saw pod success
May 17 00:55:27.764: INFO: Pod "pod-projected-secrets-01391ca0-b502-4366-9790-b4396cccb7f0" satisfied condition "Succeeded or Failed"
May 17 00:55:27.767: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-01391ca0-b502-4366-9790-b4396cccb7f0 container projected-secret-volume-test:
STEP: delete the pod
May 17 00:55:27.819: INFO: Waiting for pod pod-projected-secrets-01391ca0-b502-4366-9790-b4396cccb7f0 to disappear
May 17 00:55:27.822: INFO: Pod pod-projected-secrets-01391ca0-b502-4366-9790-b4396cccb7f0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:55:27.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6316" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3940,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:55:27.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-4f3403e7-f934-40df-b20f-28634b650916
STEP: Creating secret with name s-test-opt-upd-69a40aaa-86d0-4e8a-b769-40d5165b2605
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4f3403e7-f934-40df-b20f-28634b650916
STEP: Updating secret s-test-opt-upd-69a40aaa-86d0-4e8a-b769-40d5165b2605
STEP: Creating secret with name s-test-opt-create-327e28d1-0bc5-4b70-a723-bac31edd98c5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:55:38.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8421" for this suite.
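The "optional updates" behavior exercised above depends on marking a secret volume source `optional: true`: the pod starts even if the referenced secret does not yet exist, and the kubelet later syncs created/updated secret data into the mounted volume. A sketch of such a volume source (pod name and mount path are illustrative):

```yaml
# Sketch only: an optional secret volume tolerates a missing secret at pod start.
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo           # hypothetical name
spec:
  containers:
  - name: creates-volume-test
    image: busybox
    command: ["sleep", "600"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: opt-secret
    secret:
      # secret name taken from this run's STEP lines
      secretName: s-test-opt-create-327e28d1-0bc5-4b70-a723-bac31edd98c5
      optional: true                 # pod starts even while the secret is absent
```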
• [SLOW TEST:10.495 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3947,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:55:38.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-202c7388-7fe6-4076-9a67-ed0807def823
STEP: Creating a pod to test consume configMaps
May 17 00:55:38.411: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff" in namespace "projected-3832" to be "Succeeded or Failed"
May 17 00:55:38.432: INFO: Pod "pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff": Phase="Pending", Reason="", readiness=false. Elapsed: 20.297574ms
May 17 00:55:40.436: INFO: Pod "pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024763812s
May 17 00:55:42.441: INFO: Pod "pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff": Phase="Running", Reason="", readiness=true. Elapsed: 4.029060204s
May 17 00:55:44.451: INFO: Pod "pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039069503s
STEP: Saw pod success
May 17 00:55:44.451: INFO: Pod "pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff" satisfied condition "Succeeded or Failed"
May 17 00:55:44.453: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff container projected-configmap-volume-test:
STEP: delete the pod
May 17 00:55:44.481: INFO: Waiting for pod pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff to disappear
May 17 00:55:44.622: INFO: Pod pod-projected-configmaps-58f24518-8a97-4e44-aa98-cf5d307784ff no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:55:44.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3832" for this suite.
• [SLOW TEST:6.305 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":3966,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:55:44.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 17 00:55:44.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f87aa75b-4c2b-422a-bf04-f0cb44ef8b18" in namespace "projected-6316" to be "Succeeded or Failed"
May 17 00:55:44.809: INFO: Pod "downwardapi-volume-f87aa75b-4c2b-422a-bf04-f0cb44ef8b18": Phase="Pending", Reason="", readiness=false. Elapsed: 3.369971ms
May 17 00:55:46.983: INFO: Pod "downwardapi-volume-f87aa75b-4c2b-422a-bf04-f0cb44ef8b18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177717642s
May 17 00:55:48.987: INFO: Pod "downwardapi-volume-f87aa75b-4c2b-422a-bf04-f0cb44ef8b18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181146546s
STEP: Saw pod success
May 17 00:55:48.987: INFO: Pod "downwardapi-volume-f87aa75b-4c2b-422a-bf04-f0cb44ef8b18" satisfied condition "Succeeded or Failed"
May 17 00:55:48.989: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f87aa75b-4c2b-422a-bf04-f0cb44ef8b18 container client-container:
STEP: delete the pod
May 17 00:55:49.022: INFO: Waiting for pod downwardapi-volume-f87aa75b-4c2b-422a-bf04-f0cb44ef8b18 to disappear
May 17 00:55:49.079: INFO: Pod downwardapi-volume-f87aa75b-4c2b-422a-bf04-f0cb44ef8b18 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:55:49.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6316" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":3967,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:55:49.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
May 17 00:55:53.551: INFO: Pod pod-hostip-e6498502-6c5c-4227-b355-b3a871b8b493 has hostIP: 172.17.0.13
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:55:53.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4635" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3979,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:55:53.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-713.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-713.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 17 00:56:01.790: INFO: DNS probes using dns-713/dns-test-2bb7d228-2955-468c-8718-cdd870b2055c succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:56:01.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-713" for this suite.
• [SLOW TEST:8.311 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":241,"skipped":3985,"failed":0}
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:56:01.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
May 17 00:56:02.145: INFO: Waiting up to 5m0s for pod "pod-fbba4cb7-a45c-44ec-9966-452ef6e08247" in namespace "emptydir-7264" to be "Succeeded or Failed"
May 17 00:56:02.344: INFO: Pod "pod-fbba4cb7-a45c-44ec-9966-452ef6e08247": Phase="Pending", Reason="", readiness=false. Elapsed: 198.23168ms
May 17 00:56:04.347: INFO: Pod "pod-fbba4cb7-a45c-44ec-9966-452ef6e08247": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201102385s
May 17 00:56:06.351: INFO: Pod "pod-fbba4cb7-a45c-44ec-9966-452ef6e08247": Phase="Running", Reason="", readiness=true. Elapsed: 4.205283345s
May 17 00:56:08.355: INFO: Pod "pod-fbba4cb7-a45c-44ec-9966-452ef6e08247": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209410513s
STEP: Saw pod success
May 17 00:56:08.355: INFO: Pod "pod-fbba4cb7-a45c-44ec-9966-452ef6e08247" satisfied condition "Succeeded or Failed"
May 17 00:56:08.357: INFO: Trying to get logs from node latest-worker2 pod pod-fbba4cb7-a45c-44ec-9966-452ef6e08247 container test-container:
STEP: delete the pod
May 17 00:56:08.391: INFO: Waiting for pod pod-fbba4cb7-a45c-44ec-9966-452ef6e08247 to disappear
May 17 00:56:08.398: INFO: Pod pod-fbba4cb7-a45c-44ec-9966-452ef6e08247 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:56:08.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7264" for this suite.
• [SLOW TEST:6.536 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":242,"skipped":3985,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:56:08.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 17 00:56:09.032: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 17 00:56:11.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273769, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273769, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273769, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725273769, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 17 00:56:14.222: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 17 00:56:14.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:56:15.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3563" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.130 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":243,"skipped":3987,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:56:15.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-25ad2090-b9d4-4974-b3c9-9cd7335a996c
STEP: Creating a pod to test consume secrets
May 17 00:56:15.657: INFO: Waiting up to 5m0s for pod "pod-secrets-2368dfe6-9aaa-44bc-9be6-1d6c11d7f5a4" in namespace "secrets-6134" to be "Succeeded or Failed"
May 17 00:56:15.673: INFO: Pod "pod-secrets-2368dfe6-9aaa-44bc-9be6-1d6c11d7f5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.726475ms
May 17 00:56:17.676: INFO: Pod "pod-secrets-2368dfe6-9aaa-44bc-9be6-1d6c11d7f5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019577794s
May 17 00:56:19.680: INFO: Pod "pod-secrets-2368dfe6-9aaa-44bc-9be6-1d6c11d7f5a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023647078s
STEP: Saw pod success
May 17 00:56:19.681: INFO: Pod "pod-secrets-2368dfe6-9aaa-44bc-9be6-1d6c11d7f5a4" satisfied condition "Succeeded or Failed"
May 17 00:56:19.684: INFO: Trying to get logs from node latest-worker pod pod-secrets-2368dfe6-9aaa-44bc-9be6-1d6c11d7f5a4 container secret-volume-test:
STEP: delete the pod
May 17 00:56:19.720: INFO: Waiting for pod pod-secrets-2368dfe6-9aaa-44bc-9be6-1d6c11d7f5a4 to disappear
May 17 00:56:19.734: INFO: Pod pod-secrets-2368dfe6-9aaa-44bc-9be6-1d6c11d7f5a4 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:56:19.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6134" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":244,"skipped":3995,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:56:19.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:56:24.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3987" for this suite.
• [SLOW TEST:5.120 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":245,"skipped":4001,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:56:24.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-jdxw
STEP: Creating a pod to test atomic-volume-subpath
May 17 00:56:24.987: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jdxw" in namespace "subpath-614" to be "Succeeded or Failed"
May 17 00:56:25.051: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Pending", Reason="", readiness=false. Elapsed: 64.110335ms
May 17 00:56:27.056: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068572686s
May 17 00:56:29.067: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 4.080131303s
May 17 00:56:31.071: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 6.083861985s
May 17 00:56:33.075: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 8.088446681s
May 17 00:56:35.080: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 10.092921393s
May 17 00:56:37.084: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 12.097063663s
May 17 00:56:39.088: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 14.101053747s
May 17 00:56:41.092: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 16.104965689s
May 17 00:56:43.096: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 18.108630098s
May 17 00:56:45.099: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 20.111656105s
May 17 00:56:47.103: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Running", Reason="", readiness=true. Elapsed: 22.116325524s
May 17 00:56:49.113: INFO: Pod "pod-subpath-test-configmap-jdxw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.125950782s
STEP: Saw pod success
May 17 00:56:49.113: INFO: Pod "pod-subpath-test-configmap-jdxw" satisfied condition "Succeeded or Failed"
May 17 00:56:49.115: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-jdxw container test-container-subpath-configmap-jdxw:
STEP: delete the pod
May 17 00:56:49.144: INFO: Waiting for pod pod-subpath-test-configmap-jdxw to disappear
May 17 00:56:49.148: INFO: Pod pod-subpath-test-configmap-jdxw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jdxw
May 17 00:56:49.148: INFO: Deleting pod "pod-subpath-test-configmap-jdxw" in namespace "subpath-614"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:56:49.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-614" for this suite.
• [SLOW TEST:24.294 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":246,"skipped":4009,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 00:56:49.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 17 00:56:57.151: INFO: 0 pods remaining
May 17 00:56:57.151: INFO: 0 pods has nil DeletionTimestamp
May 17 00:56:57.151: INFO:
May 17 00:56:58.612: INFO: 0 pods remaining
May 17 00:56:58.612: INFO: 0 pods has nil DeletionTimestamp
May 17 00:56:58.612: INFO:
STEP: Gathering metrics
W0517 00:57:00.049272 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 17 00:57:00.049: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:57:00.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9095" for this suite.
• [SLOW TEST:10.950 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":247,"skipped":4074,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:57:00.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 17 00:57:00.763: INFO: Waiting up to 5m0s for pod "client-containers-919248ab-1283-4c65-b989-19d945dd2313" in namespace "containers-8884" to be "Succeeded or Failed" May 17 00:57:00.767: INFO: Pod "client-containers-919248ab-1283-4c65-b989-19d945dd2313": Phase="Pending", Reason="", readiness=false. Elapsed: 3.974702ms May 17 00:57:02.960: INFO: Pod "client-containers-919248ab-1283-4c65-b989-19d945dd2313": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.196618123s May 17 00:57:04.979: INFO: Pod "client-containers-919248ab-1283-4c65-b989-19d945dd2313": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.215249734s STEP: Saw pod success May 17 00:57:04.979: INFO: Pod "client-containers-919248ab-1283-4c65-b989-19d945dd2313" satisfied condition "Succeeded or Failed" May 17 00:57:04.981: INFO: Trying to get logs from node latest-worker2 pod client-containers-919248ab-1283-4c65-b989-19d945dd2313 container test-container: STEP: delete the pod May 17 00:57:04.999: INFO: Waiting for pod client-containers-919248ab-1283-4c65-b989-19d945dd2313 to disappear May 17 00:57:05.014: INFO: Pod client-containers-919248ab-1283-4c65-b989-19d945dd2313 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:57:05.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8884" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4076,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:57:05.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:57:05.111: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 17 00:57:10.136: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 17 00:57:10.136: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 17 00:57:10.236: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5747 /apis/apps/v1/namespaces/deployment-5747/deployments/test-cleanup-deployment 4bd8f819-25b3-47cd-b07b-6e9bd19deda1 5299208 1 2020-05-17 00:57:10 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-17 00:57:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log 
File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005834898 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 17 00:57:10.240: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-5747 /apis/apps/v1/namespaces/deployment-5747/replicasets/test-cleanup-deployment-6688745694 18ef61ad-863c-4a96-8b89-da157503a9ad 5299210 1 2020-05-17 00:57:10 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 4bd8f819-25b3-47cd-b07b-6e9bd19deda1 0xc005983357 0xc005983358}] [] [{kube-controller-manager Update apps/v1 2020-05-17 00:57:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd8f819-25b3-47cd-b07b-6e9bd19deda1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0059833e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 17 00:57:10.240: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 17 00:57:10.240: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5747 /apis/apps/v1/namespaces/deployment-5747/replicasets/test-cleanup-controller 8c4b180f-f247-4afa-a9e2-93f19fceafa8 5299209 1 2020-05-17 00:57:05 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 4bd8f819-25b3-47cd-b07b-6e9bd19deda1 0xc00598321f 0xc005983240}] [] [{e2e.test Update apps/v1 2020-05-17 00:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-17 00:57:10 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"4bd8f819-25b3-47cd-b07b-6e9bd19deda1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0059832e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 17 00:57:10.281: INFO: Pod "test-cleanup-controller-wbrsd" is available: &Pod{ObjectMeta:{test-cleanup-controller-wbrsd test-cleanup-controller- deployment-5747 /api/v1/namespaces/deployment-5747/pods/test-cleanup-controller-wbrsd 5b3deb3a-bd34-4d4f-af32-8e6102a914af 5299194 0 2020-05-17 00:57:05 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 8c4b180f-f247-4afa-a9e2-93f19fceafa8 0xc005834c47 0xc005834c48}] [] [{kube-controller-manager Update v1 2020-05-17 00:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4b180f-f247-4afa-a9e2-93f19fceafa8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 00:57:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.236\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zxgfp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zxgfp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zxgfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServi
ceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:57:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:57:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:57:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 00:57:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.236,StartTime:2020-05-17 00:57:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-17 00:57:07 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ba117096b04b77d015f3ce9cb2fa96df939890d247f3ad1f75add74a9290a185,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 17 00:57:10.281: INFO: Pod "test-cleanup-deployment-6688745694-j2gnp" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-j2gnp test-cleanup-deployment-6688745694- deployment-5747 /api/v1/namespaces/deployment-5747/pods/test-cleanup-deployment-6688745694-j2gnp c82cef96-27a5-4e68-9a2f-e084fa103239 5299213 0 2020-05-17 00:57:10 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 18ef61ad-863c-4a96-8b89-da157503a9ad 0xc005834e07 0xc005834e08}] [] [{kube-controller-manager Update v1 2020-05-17 00:57:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18ef61ad-863c-4a96-8b89-da157503a9ad\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zxgfp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zxgfp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zxgfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:57:10.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5747" for this suite. 
• [SLOW TEST:5.351 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":249,"skipped":4092,"failed":0} [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:57:10.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4096.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4096.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 228.25.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.25.228_udp@PTR;check="$$(dig +tcp +noall +answer +search 228.25.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.25.228_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4096.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4096.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4096.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4096.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4096.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 228.25.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.25.228_udp@PTR;check="$$(dig +tcp +noall +answer +search 228.25.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.25.228_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 17 00:57:20.842: INFO: Unable to read wheezy_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:20.846: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:20.849: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:20.852: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:20.874: INFO: Unable to read jessie_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:20.877: INFO: Unable to read jessie_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:20.880: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod 
dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:20.883: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:20.903: INFO: Lookups using dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745 failed for: [wheezy_udp@dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_udp@dns-test-service.dns-4096.svc.cluster.local jessie_tcp@dns-test-service.dns-4096.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local] May 17 00:57:25.908: INFO: Unable to read wheezy_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:25.913: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:25.917: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:25.920: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod 
dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:25.943: INFO: Unable to read jessie_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:25.947: INFO: Unable to read jessie_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:25.950: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:25.952: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745) May 17 00:57:25.971: INFO: Lookups using dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745 failed for: [wheezy_udp@dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_udp@dns-test-service.dns-4096.svc.cluster.local jessie_tcp@dns-test-service.dns-4096.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local] May 17 00:57:30.908: INFO: Unable to read wheezy_udp@dns-test-service.dns-4096.svc.cluster.local from pod 
dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:30.911: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:30.915: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:30.918: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:30.943: INFO: Unable to read jessie_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:30.945: INFO: Unable to read jessie_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:30.948: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:30.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:30.967: INFO: Lookups using dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745 failed for: [wheezy_udp@dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_udp@dns-test-service.dns-4096.svc.cluster.local jessie_tcp@dns-test-service.dns-4096.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local]
May 17 00:57:35.908: INFO: Unable to read wheezy_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:35.912: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:35.916: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:35.919: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:35.943: INFO: Unable to read jessie_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:35.956: INFO: Unable to read jessie_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:35.959: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:35.961: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:35.977: INFO: Lookups using dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745 failed for: [wheezy_udp@dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_udp@dns-test-service.dns-4096.svc.cluster.local jessie_tcp@dns-test-service.dns-4096.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local]
May 17 00:57:40.908: INFO: Unable to read wheezy_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:40.913: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:40.916: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:40.920: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:40.947: INFO: Unable to read jessie_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:40.950: INFO: Unable to read jessie_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:40.952: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:40.954: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:40.974: INFO: Lookups using dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745 failed for: [wheezy_udp@dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_udp@dns-test-service.dns-4096.svc.cluster.local jessie_tcp@dns-test-service.dns-4096.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local]
May 17 00:57:45.908: INFO: Unable to read wheezy_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:45.911: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:45.915: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:45.918: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:45.939: INFO: Unable to read jessie_udp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:45.942: INFO: Unable to read jessie_tcp@dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:45.944: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:45.947: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local from pod dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745: the server could not find the requested resource (get pods dns-test-3fd05520-e6f2-4511-b61a-5089969a2745)
May 17 00:57:45.962: INFO: Lookups using dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745 failed for: [wheezy_udp@dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@dns-test-service.dns-4096.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_udp@dns-test-service.dns-4096.svc.cluster.local jessie_tcp@dns-test-service.dns-4096.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4096.svc.cluster.local]
May 17 00:57:50.965: INFO: DNS probes using dns-4096/dns-test-3fd05520-e6f2-4511-b61a-5089969a2745 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:57:51.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4096" for this suite. 
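The probes above all target the standard cluster-DNS names for the test service: the service record `<service>.<namespace>.svc.cluster.local` and the SRV-style record `_<port>._<proto>.<service>.<namespace>.svc.cluster.local`, each queried over UDP and TCP from two probe images ("wheezy" and "jessie"). A minimal sketch of how those names are composed (the `build_dns_names` helper is hypothetical, not part of the e2e framework; only the naming scheme is taken from the log):

```python
# Sketch of the cluster-DNS names probed by the test above.
# build_dns_names is a hypothetical illustration helper.
def build_dns_names(service: str, namespace: str,
                    port: str = "http", proto: str = "tcp") -> list:
    base = f"{service}.{namespace}.svc.cluster.local"  # Service A/AAAA record
    srv = f"_{port}._{proto}.{base}"                   # SRV record for a named port
    return [base, srv]

# The two names below, looked up over UDP and TCP from the wheezy and
# jessie probe containers, account for the eight lookups in the log.
names = build_dns_names("dns-test-service", "dns-4096")
```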
• [SLOW TEST:41.423 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":250,"skipped":4092,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:57:51.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-516 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-516 May 17 00:57:51.922: INFO: Found 0 stateful pods, waiting for 1 May 17 00:58:01.927: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic 
StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 17 00:58:01.962: INFO: Deleting all statefulset in ns statefulset-516 May 17 00:58:02.003: INFO: Scaling statefulset ss to 0 May 17 00:58:22.073: INFO: Waiting for statefulset status.replicas updated to 0 May 17 00:58:22.076: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:58:22.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-516" for this suite. • [SLOW TEST:30.301 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":251,"skipped":4108,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:58:22.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:58:22.157: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:58:28.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9755" for this suite. • [SLOW TEST:6.409 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":252,"skipped":4114,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:58:28.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 00:58:28.588: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f50ac922-3342-4081-b600-60cda6564c6f" in namespace "downward-api-3801" to be "Succeeded or Failed" May 17 00:58:28.606: INFO: Pod "downwardapi-volume-f50ac922-3342-4081-b600-60cda6564c6f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.750264ms May 17 00:58:30.609: INFO: Pod "downwardapi-volume-f50ac922-3342-4081-b600-60cda6564c6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021451552s May 17 00:58:32.614: INFO: Pod "downwardapi-volume-f50ac922-3342-4081-b600-60cda6564c6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026468239s STEP: Saw pod success May 17 00:58:32.614: INFO: Pod "downwardapi-volume-f50ac922-3342-4081-b600-60cda6564c6f" satisfied condition "Succeeded or Failed" May 17 00:58:32.618: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f50ac922-3342-4081-b600-60cda6564c6f container client-container: STEP: delete the pod May 17 00:58:32.669: INFO: Waiting for pod downwardapi-volume-f50ac922-3342-4081-b600-60cda6564c6f to disappear May 17 00:58:32.679: INFO: Pod downwardapi-volume-f50ac922-3342-4081-b600-60cda6564c6f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:58:32.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3801" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4123,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:58:32.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 17 00:58:32.872: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:58:32.878: INFO: Number of nodes with available pods: 0
May 17 00:58:32.878: INFO: Node latest-worker is running more than one daemon pod
May 17 00:58:33.884: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:58:33.887: INFO: Number of nodes with available pods: 0
May 17 00:58:33.887: INFO: Node latest-worker is running more than one daemon pod
May 17 00:58:34.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:58:34.923: INFO: Number of nodes with available pods: 0
May 17 00:58:34.923: INFO: Node latest-worker is running more than one daemon pod
May 17 00:58:35.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:58:35.886: INFO: Number of nodes with available pods: 0
May 17 00:58:35.886: INFO: Node latest-worker is running more than one daemon pod
May 17 00:58:36.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:58:36.887: INFO: Number of nodes with available pods: 2
May 17 00:58:36.887: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 17 00:58:36.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 00:58:36.935: INFO: Number of nodes with available pods: 2
May 17 00:58:36.935: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7129, will wait for the garbage collector to delete the pods
May 17 00:58:38.136: INFO: Deleting DaemonSet.extensions daemon-set took: 5.275373ms
May 17 00:58:38.437: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.436016ms
May 17 00:58:42.362: INFO: Number of nodes with available pods: 0
May 17 00:58:42.362: INFO: Number of running nodes: 0, number of available pods: 0
May 17 00:58:42.364: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7129/daemonsets","resourceVersion":"5299807"},"items":null}
May 17 00:58:42.367: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7129/pods","resourceVersion":"5299807"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 00:58:42.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7129" for this suite. 
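The repeated "can't tolerate node latest-control-plane" lines above come from the scheduling rule the test applies: a node is skipped when its taints are not all matched by a toleration on the DaemonSet's pods, and the plain `daemon-set` pods carry no toleration for the master `NoSchedule` taint. A deliberately simplified sketch of that matching rule (the `tolerates` helper is hypothetical; the full semantics, including operators and values, live in the Kubernetes API's Toleration type):

```python
# Simplified taint-vs-toleration match: a toleration matches a taint when
# its key and effect agree (an omitted key or effect matches anything).
# Hypothetical helper for illustration, not the real Kubernetes code.
def tolerates(taint: dict, tolerations: list) -> bool:
    for t in tolerations:
        key_ok = t.get("key") in (None, taint["key"])
        effect_ok = t.get("effect") in (None, taint["effect"])
        if key_ok and effect_ok:
            return True
    return False

master_taint = {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
# The test's daemon pods have no tolerations, so the control-plane node
# is skipped when counting nodes that should run a daemon pod.
skipped = not tolerates(master_taint, [])
```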
• [SLOW TEST:9.698 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":254,"skipped":4144,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:58:42.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-96513dbe-a16c-4cc4-bb19-2dbc766c2984 STEP: Creating a pod to test consume configMaps May 17 00:58:42.461: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-78854a4e-dca1-4e47-8761-884d9edfd7f0" in namespace "projected-5822" to be "Succeeded or Failed" May 17 00:58:42.499: INFO: Pod "pod-projected-configmaps-78854a4e-dca1-4e47-8761-884d9edfd7f0": Phase="Pending", Reason="", readiness=false. Elapsed: 38.475675ms May 17 00:58:44.503: INFO: Pod "pod-projected-configmaps-78854a4e-dca1-4e47-8761-884d9edfd7f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042222788s May 17 00:58:46.602: INFO: Pod "pod-projected-configmaps-78854a4e-dca1-4e47-8761-884d9edfd7f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141332512s STEP: Saw pod success May 17 00:58:46.602: INFO: Pod "pod-projected-configmaps-78854a4e-dca1-4e47-8761-884d9edfd7f0" satisfied condition "Succeeded or Failed" May 17 00:58:46.606: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-78854a4e-dca1-4e47-8761-884d9edfd7f0 container projected-configmap-volume-test: STEP: delete the pod May 17 00:58:46.690: INFO: Waiting for pod pod-projected-configmaps-78854a4e-dca1-4e47-8761-884d9edfd7f0 to disappear May 17 00:58:46.757: INFO: Pod pod-projected-configmaps-78854a4e-dca1-4e47-8761-884d9edfd7f0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:58:46.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5822" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4147,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:58:46.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:59:03.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7142" for this suite. • [SLOW TEST:16.265 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":288,"completed":256,"skipped":4167,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:59:03.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:59:03.189: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d64a2dc8-bb5b-4889-983e-1f59df5c4a3f", Controller:(*bool)(0xc004fd6792), BlockOwnerDeletion:(*bool)(0xc004fd6793)}} May 17 00:59:03.208: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"26eaea15-a9f6-44fd-a2e8-dcd1bd109372", Controller:(*bool)(0xc003bd8722), BlockOwnerDeletion:(*bool)(0xc003bd8723)}} May 17 00:59:03.285: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c9ecd89d-3cb6-4bdd-9c99-17fa6d7935a3", Controller:(*bool)(0xc003bd88ea), BlockOwnerDeletion:(*bool)(0xc003bd88eb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:59:08.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9595" for this suite. 
• [SLOW TEST:5.340 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":257,"skipped":4182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:59:08.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 17 00:59:08.559: INFO: Pod name pod-release: Found 0 pods out of 1 May 17 00:59:13.571: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:59:13.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replication-controller-952" for this suite. • [SLOW TEST:5.663 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":258,"skipped":4228,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:59:14.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 00:59:14.176: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0e134d7f-a591-4703-ac67-5fbb1ce8f196" in namespace "security-context-test-8489" to be "Succeeded or Failed" May 17 00:59:14.220: INFO: Pod "alpine-nnp-false-0e134d7f-a591-4703-ac67-5fbb1ce8f196": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.657221ms May 17 00:59:16.224: INFO: Pod "alpine-nnp-false-0e134d7f-a591-4703-ac67-5fbb1ce8f196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047364561s May 17 00:59:18.232: INFO: Pod "alpine-nnp-false-0e134d7f-a591-4703-ac67-5fbb1ce8f196": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055857203s May 17 00:59:20.245: INFO: Pod "alpine-nnp-false-0e134d7f-a591-4703-ac67-5fbb1ce8f196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069227267s May 17 00:59:20.246: INFO: Pod "alpine-nnp-false-0e134d7f-a591-4703-ac67-5fbb1ce8f196" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:59:20.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8489" for this suite. • [SLOW TEST:6.232 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4262,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:59:20.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-e25d4861-74da-45f4-a6ba-2a03505dd645 STEP: Creating a pod to test consume configMaps May 17 00:59:20.588: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c37cdaf1-7c46-4cab-85c0-87e04b5ef901" in namespace "projected-4395" to be "Succeeded or Failed" May 17 00:59:20.591: INFO: Pod "pod-projected-configmaps-c37cdaf1-7c46-4cab-85c0-87e04b5ef901": Phase="Pending", Reason="", readiness=false. Elapsed: 3.734691ms May 17 00:59:22.595: INFO: Pod "pod-projected-configmaps-c37cdaf1-7c46-4cab-85c0-87e04b5ef901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007451172s May 17 00:59:24.599: INFO: Pod "pod-projected-configmaps-c37cdaf1-7c46-4cab-85c0-87e04b5ef901": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011082887s STEP: Saw pod success May 17 00:59:24.599: INFO: Pod "pod-projected-configmaps-c37cdaf1-7c46-4cab-85c0-87e04b5ef901" satisfied condition "Succeeded or Failed" May 17 00:59:24.601: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c37cdaf1-7c46-4cab-85c0-87e04b5ef901 container projected-configmap-volume-test: STEP: delete the pod May 17 00:59:24.902: INFO: Waiting for pod pod-projected-configmaps-c37cdaf1-7c46-4cab-85c0-87e04b5ef901 to disappear May 17 00:59:24.912: INFO: Pod pod-projected-configmaps-c37cdaf1-7c46-4cab-85c0-87e04b5ef901 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 00:59:24.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4395" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":260,"skipped":4281,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 00:59:24.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-3d7ccc21-b159-41c1-b9b3-bb29b9cc9785 in namespace container-probe-3845 May 17 00:59:29.002: INFO: Started pod test-webserver-3d7ccc21-b159-41c1-b9b3-bb29b9cc9785 in namespace container-probe-3845 STEP: checking the pod's current state and verifying that restartCount is present May 17 00:59:29.027: INFO: Initial restart count of pod test-webserver-3d7ccc21-b159-41c1-b9b3-bb29b9cc9785 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:03:30.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3845" for this suite. • [SLOW TEST:245.191 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":261,"skipped":4287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:03:30.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-d960e794-a3b8-413a-94b8-a105f6dd7697 May 17 01:03:30.616: INFO: Pod name my-hostname-basic-d960e794-a3b8-413a-94b8-a105f6dd7697: Found 0 pods out of 1 May 17 01:03:35.623: INFO: Pod name my-hostname-basic-d960e794-a3b8-413a-94b8-a105f6dd7697: Found 1 pods out of 1 May 17 01:03:35.623: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d960e794-a3b8-413a-94b8-a105f6dd7697" are running May 17 01:03:35.626: INFO: Pod "my-hostname-basic-d960e794-a3b8-413a-94b8-a105f6dd7697-s7tsj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 01:03:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 01:03:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 01:03:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 01:03:30 +0000 UTC Reason: Message:}]) May 17 01:03:35.627: INFO: Trying to dial the pod May 17 01:03:40.639: INFO: Controller my-hostname-basic-d960e794-a3b8-413a-94b8-a105f6dd7697: Got expected result from replica 1 [my-hostname-basic-d960e794-a3b8-413a-94b8-a105f6dd7697-s7tsj]: "my-hostname-basic-d960e794-a3b8-413a-94b8-a105f6dd7697-s7tsj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:03:40.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-900" for this suite. • [SLOW TEST:10.516 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":262,"skipped":4357,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:03:40.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-pgz7 STEP: Creating a pod to test atomic-volume-subpath May 17 01:03:40.768: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pgz7" in namespace "subpath-6478" to be "Succeeded or Failed" May 17 
01:03:40.772: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814484ms May 17 01:03:42.776: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007748622s May 17 01:03:44.785: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 4.016672579s May 17 01:03:46.790: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 6.02164253s May 17 01:03:48.793: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 8.02558716s May 17 01:03:50.798: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 10.029679628s May 17 01:03:52.802: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 12.033741213s May 17 01:03:54.805: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 14.037523901s May 17 01:03:56.810: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 16.042228065s May 17 01:03:58.814: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 18.046518177s May 17 01:04:00.819: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 20.051094995s May 17 01:04:02.823: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Running", Reason="", readiness=true. Elapsed: 22.055078658s May 17 01:04:04.826: INFO: Pod "pod-subpath-test-downwardapi-pgz7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.058521791s STEP: Saw pod success May 17 01:04:04.826: INFO: Pod "pod-subpath-test-downwardapi-pgz7" satisfied condition "Succeeded or Failed" May 17 01:04:04.829: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-pgz7 container test-container-subpath-downwardapi-pgz7: STEP: delete the pod May 17 01:04:04.879: INFO: Waiting for pod pod-subpath-test-downwardapi-pgz7 to disappear May 17 01:04:04.890: INFO: Pod pod-subpath-test-downwardapi-pgz7 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-pgz7 May 17 01:04:04.890: INFO: Deleting pod "pod-subpath-test-downwardapi-pgz7" in namespace "subpath-6478" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:04:04.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6478" for this suite. • [SLOW TEST:24.249 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":263,"skipped":4378,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 
01:04:04.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-bea81553-6cad-44f5-99c9-3ec05b6a52a9 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-bea81553-6cad-44f5-99c9-3ec05b6a52a9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:05:35.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-620" for this suite. • [SLOW TEST:90.683 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4390,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:05:35.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-szrk STEP: Creating a pod to test atomic-volume-subpath May 17 01:05:35.701: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-szrk" in namespace "subpath-9252" to be "Succeeded or Failed" May 17 01:05:35.708: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Pending", Reason="", readiness=false. Elapsed: 7.28182ms May 17 01:05:37.713: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011867491s May 17 01:05:39.716: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 4.015148093s May 17 01:05:41.721: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 6.020211441s May 17 01:05:43.726: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 8.024876674s May 17 01:05:45.730: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 10.029042824s May 17 01:05:47.773: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 12.071563005s May 17 01:05:49.785: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 14.083856284s May 17 01:05:51.790: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 16.088760356s May 17 01:05:53.794: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.092952551s May 17 01:05:55.798: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 20.097448709s May 17 01:05:57.802: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 22.101289885s May 17 01:05:59.807: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Running", Reason="", readiness=true. Elapsed: 24.105839421s May 17 01:06:01.811: INFO: Pod "pod-subpath-test-configmap-szrk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.110217591s STEP: Saw pod success May 17 01:06:01.811: INFO: Pod "pod-subpath-test-configmap-szrk" satisfied condition "Succeeded or Failed" May 17 01:06:01.814: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-szrk container test-container-subpath-configmap-szrk: STEP: delete the pod May 17 01:06:01.862: INFO: Waiting for pod pod-subpath-test-configmap-szrk to disappear May 17 01:06:01.876: INFO: Pod pod-subpath-test-configmap-szrk no longer exists STEP: Deleting pod pod-subpath-test-configmap-szrk May 17 01:06:01.876: INFO: Deleting pod "pod-subpath-test-configmap-szrk" in namespace "subpath-9252" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:06:01.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9252" for this suite. 
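The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Elapsed: …` lines in the subpath tests above come from the framework's poll-until-terminal-phase loop. A minimal Python sketch of that pattern, with a stub `phases` iterator standing in for API-server responses (the function name and the simulated phase sequence are illustrative, not the framework's actual code):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    mirroring the e2e framework's 'Waiting up to 5m0s' loop.
    Returns (phase, elapsed_seconds)."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.0f}s")
        time.sleep(interval)

# Simulated phase sequence, matching the Pending -> Running -> Succeeded
# progression the log records for pod-subpath-test-configmap-szrk.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
phase, _ = wait_for_pod_phase(lambda: next(phases), interval=0.01)
```

With the stub above, the loop prints one `Phase=…  Elapsed: …` line per poll and returns once the terminal phase appears, which is exactly the shape of the log lines in each test block.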
• [SLOW TEST:26.306 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":265,"skipped":4393,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:06:01.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 01:06:01.954: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:06:06.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-861" for this 
suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":266,"skipped":4415,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:06:06.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-d1aa8446-4d98-4efb-a899-06467e7a200a in namespace container-probe-4978 May 17 01:06:10.215: INFO: Started pod busybox-d1aa8446-4d98-4efb-a899-06467e7a200a in namespace container-probe-4978 STEP: checking the pod's current state and verifying that restartCount is present May 17 01:06:10.218: INFO: Initial restart count of pod busybox-d1aa8446-4d98-4efb-a899-06467e7a200a is 0 May 17 01:07:06.562: INFO: Restart count of pod container-probe-4978/busybox-d1aa8446-4d98-4efb-a899-06467e7a200a is now 1 (56.34423878s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:07:06.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-4978" for this suite. • [SLOW TEST:60.525 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4437,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:07:06.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 17 01:07:06.700: INFO: Waiting up to 5m0s for pod "pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972" in namespace "emptydir-506" to be "Succeeded or Failed" May 17 01:07:06.755: INFO: Pod "pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972": Phase="Pending", Reason="", readiness=false. Elapsed: 55.217461ms May 17 01:07:08.760: INFO: Pod "pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.05943828s May 17 01:07:10.763: INFO: Pod "pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063101393s May 17 01:07:12.767: INFO: Pod "pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067090477s STEP: Saw pod success May 17 01:07:12.767: INFO: Pod "pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972" satisfied condition "Succeeded or Failed" May 17 01:07:12.770: INFO: Trying to get logs from node latest-worker2 pod pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972 container test-container: STEP: delete the pod May 17 01:07:12.789: INFO: Waiting for pod pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972 to disappear May 17 01:07:12.820: INFO: Pod pod-45e526a3-6d58-4fb1-8f7e-b41232c6a972 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:07:12.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-506" for this suite. 
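The `(non-root,0777,tmpfs)` case above creates a file on an emptyDir tmpfs mount with mode 0777 and has the test container print the resulting permissions. The permission-string side of that check can be sketched locally (the file path and contents here are illustrative, not what the test container writes):

```python
import os
import stat
import tempfile

# Create a throwaway file and apply the 0777 mode from the test name.
path = os.path.join(tempfile.mkdtemp(), "mount-test")
with open(path, "w") as f:
    f.write("placeholder\n")
os.chmod(path, 0o777)  # chmod sets the mode exactly; umask does not apply

perms = stat.filemode(os.stat(path).st_mode)
print(perms)  # -rwxrwxrwx
```

`stat.filemode` renders the mode in the same `ls -l` style the test container emits, so 0o777 on a regular file comes out as `-rwxrwxrwx`.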
• [SLOW TEST:6.195 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4450,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:07:12.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3964.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3964.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3964.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3964.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3964.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3964.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 17 01:07:18.979: INFO: DNS probes using dns-3964/dns-test-26e64ace-74a2-4ae3-bd72-c3119e77a871 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:07:19.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3964" for this suite. 
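The probe scripts above derive the pod's A record with `hostname -i | awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3964.pod.cluster.local"}'`, i.e. the pod IP with dots replaced by dashes, qualified under `<namespace>.pod.cluster.local`. The same transformation in Python (the pod IP `10.244.1.5` is a hypothetical example; the namespace is the `dns-3964` from this run):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build a pod A-record name the way the wheezy/jessie probe
    scripts do: dots in the IP become dashes."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

record = pod_a_record("10.244.1.5", "dns-3964")
print(record)  # 10-244-1-5.dns-3964.pod.cluster.local
```

The `dig +notcp`/`dig +tcp` pair in the script then resolves this name over UDP and TCP respectively, writing an `OK` marker file for each transport that succeeds.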
• [SLOW TEST:6.259 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":269,"skipped":4459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:07:19.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 01:07:19.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb" in namespace "projected-4009" to be "Succeeded or Failed" May 17 01:07:19.379: INFO: Pod "downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb": Phase="Pending", Reason="", readiness=false. Elapsed: 233.07157ms May 17 01:07:21.383: INFO: Pod "downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.237705682s May 17 01:07:23.388: INFO: Pod "downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.242141409s STEP: Saw pod success May 17 01:07:23.388: INFO: Pod "downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb" satisfied condition "Succeeded or Failed" May 17 01:07:23.390: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb container client-container: STEP: delete the pod May 17 01:07:23.585: INFO: Waiting for pod downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb to disappear May 17 01:07:23.719: INFO: Pod downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:07:23.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4009" for this suite. 
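The `should provide podname only` case above works by projecting `metadata.name` into a file via a downward-API volume and comparing the container's log (it just cats the file) against the expected pod name. A minimal local simulation of that comparison, using the pod name from this run (the temp-directory volume and file name `podname` are stand-ins for the real projected volume):

```python
import pathlib
import tempfile

pod_name = "downwardapi-volume-8a6dee14-719f-4f48-a24a-2ae6724c4abb"

# Stand-in for the kubelet writing metadata.name into the projected volume.
voldir = pathlib.Path(tempfile.mkdtemp())
(voldir / "podname").write_text(pod_name)

# Stand-in for the container's log output (it cats the projected file),
# which the test then compares to the expected pod name.
container_log = (voldir / "podname").read_text()
print(container_log == pod_name)
```

The real test does this comparison against logs fetched from the `client-container` named in the log lines above.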
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4483,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:07:23.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 01:07:23.848: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63cfea65-1d40-4ae7-a2fb-384478c9cd73" in namespace "downward-api-3190" to be "Succeeded or Failed" May 17 01:07:23.859: INFO: Pod "downwardapi-volume-63cfea65-1d40-4ae7-a2fb-384478c9cd73": Phase="Pending", Reason="", readiness=false. Elapsed: 10.303096ms May 17 01:07:25.863: INFO: Pod "downwardapi-volume-63cfea65-1d40-4ae7-a2fb-384478c9cd73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014980778s May 17 01:07:27.867: INFO: Pod "downwardapi-volume-63cfea65-1d40-4ae7-a2fb-384478c9cd73": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01876831s STEP: Saw pod success May 17 01:07:27.867: INFO: Pod "downwardapi-volume-63cfea65-1d40-4ae7-a2fb-384478c9cd73" satisfied condition "Succeeded or Failed" May 17 01:07:27.870: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-63cfea65-1d40-4ae7-a2fb-384478c9cd73 container client-container: STEP: delete the pod May 17 01:07:27.918: INFO: Waiting for pod downwardapi-volume-63cfea65-1d40-4ae7-a2fb-384478c9cd73 to disappear May 17 01:07:27.932: INFO: Pod downwardapi-volume-63cfea65-1d40-4ae7-a2fb-384478c9cd73 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:07:27.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3190" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":271,"skipped":4494,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:07:27.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name 
secret-test-4246ed50-fcc9-403a-b71e-4204bf0aea81 STEP: Creating a pod to test consume secrets May 17 01:07:28.072: INFO: Waiting up to 5m0s for pod "pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb" in namespace "secrets-2068" to be "Succeeded or Failed" May 17 01:07:28.133: INFO: Pod "pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 61.151696ms May 17 01:07:30.284: INFO: Pod "pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211911594s May 17 01:07:32.288: INFO: Pod "pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb": Phase="Running", Reason="", readiness=true. Elapsed: 4.216211511s May 17 01:07:34.292: INFO: Pod "pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.219575384s STEP: Saw pod success May 17 01:07:34.292: INFO: Pod "pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb" satisfied condition "Succeeded or Failed" May 17 01:07:34.295: INFO: Trying to get logs from node latest-worker pod pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb container secret-volume-test: STEP: delete the pod May 17 01:07:34.343: INFO: Waiting for pod pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb to disappear May 17 01:07:34.351: INFO: Pod pod-secrets-373e3681-2292-4459-8ce8-8155bd55a1fb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:07:34.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2068" for this suite. 
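The Secrets test above mounts the secret as a volume with `defaultMode` set and then asserts on the resulting file permissions inside the container. The assertion itself reduces to reading a file's mode bits; a self-contained sketch of that check (the mode `0o400` here is an example value, not necessarily the one this run used):

```python
import os
import stat
import tempfile

# Create a file and restrict it to mode 0400, as a secret volume mounted
# with defaultMode: 0400 would; then read the permission bits back.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o400)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o400
os.unlink(path)
```

`stat.S_IMODE` masks off the file-type bits, leaving only the permission bits that `defaultMode` controls.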
• [SLOW TEST:6.407 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":272,"skipped":4514,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:07:34.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 17 01:07:34.406: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:07:49.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-8613" for this suite. • [SLOW TEST:15.561 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":273,"skipped":4524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:07:49.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3779 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3779 I0517 01:07:50.088836 7 runners.go:190] Created replication controller with name: 
externalname-service, namespace: services-3779, replica count: 2 I0517 01:07:53.139285 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 01:07:56.139493 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 01:07:56.139: INFO: Creating new exec pod May 17 01:08:01.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpodhdkhs -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 17 01:08:04.252: INFO: stderr: "I0517 01:08:04.119949 4286 log.go:172] (0xc000d862c0) (0xc000860e60) Create stream\nI0517 01:08:04.119995 4286 log.go:172] (0xc000d862c0) (0xc000860e60) Stream added, broadcasting: 1\nI0517 01:08:04.122329 4286 log.go:172] (0xc000d862c0) Reply frame received for 1\nI0517 01:08:04.122371 4286 log.go:172] (0xc000d862c0) (0xc0004a05a0) Create stream\nI0517 01:08:04.122385 4286 log.go:172] (0xc000d862c0) (0xc0004a05a0) Stream added, broadcasting: 3\nI0517 01:08:04.123416 4286 log.go:172] (0xc000d862c0) Reply frame received for 3\nI0517 01:08:04.123446 4286 log.go:172] (0xc000d862c0) (0xc000846460) Create stream\nI0517 01:08:04.123462 4286 log.go:172] (0xc000d862c0) (0xc000846460) Stream added, broadcasting: 5\nI0517 01:08:04.124250 4286 log.go:172] (0xc000d862c0) Reply frame received for 5\nI0517 01:08:04.223322 4286 log.go:172] (0xc000d862c0) Data frame received for 5\nI0517 01:08:04.223351 4286 log.go:172] (0xc000846460) (5) Data frame handling\nI0517 01:08:04.223368 4286 log.go:172] (0xc000846460) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0517 01:08:04.244037 4286 log.go:172] (0xc000d862c0) Data frame received for 5\nI0517 01:08:04.244086 4286 log.go:172] (0xc000846460) (5) Data frame handling\nI0517 01:08:04.244120 4286 
log.go:172] (0xc000846460) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0517 01:08:04.244412 4286 log.go:172] (0xc000d862c0) Data frame received for 5\nI0517 01:08:04.244464 4286 log.go:172] (0xc000d862c0) Data frame received for 3\nI0517 01:08:04.244500 4286 log.go:172] (0xc0004a05a0) (3) Data frame handling\nI0517 01:08:04.244534 4286 log.go:172] (0xc000846460) (5) Data frame handling\nI0517 01:08:04.245999 4286 log.go:172] (0xc000d862c0) Data frame received for 1\nI0517 01:08:04.246038 4286 log.go:172] (0xc000860e60) (1) Data frame handling\nI0517 01:08:04.246063 4286 log.go:172] (0xc000860e60) (1) Data frame sent\nI0517 01:08:04.246103 4286 log.go:172] (0xc000d862c0) (0xc000860e60) Stream removed, broadcasting: 1\nI0517 01:08:04.246134 4286 log.go:172] (0xc000d862c0) Go away received\nI0517 01:08:04.246722 4286 log.go:172] (0xc000d862c0) (0xc000860e60) Stream removed, broadcasting: 1\nI0517 01:08:04.246744 4286 log.go:172] (0xc000d862c0) (0xc0004a05a0) Stream removed, broadcasting: 3\nI0517 01:08:04.246754 4286 log.go:172] (0xc000d862c0) (0xc000846460) Stream removed, broadcasting: 5\n" May 17 01:08:04.253: INFO: stdout: "" May 17 01:08:04.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpodhdkhs -- /bin/sh -x -c nc -zv -t -w 2 10.107.197.230 80' May 17 01:08:04.491: INFO: stderr: "I0517 01:08:04.386063 4319 log.go:172] (0xc000adb6b0) (0xc000863e00) Create stream\nI0517 01:08:04.386114 4319 log.go:172] (0xc000adb6b0) (0xc000863e00) Stream added, broadcasting: 1\nI0517 01:08:04.389093 4319 log.go:172] (0xc000adb6b0) Reply frame received for 1\nI0517 01:08:04.389307 4319 log.go:172] (0xc000adb6b0) (0xc00086cdc0) Create stream\nI0517 01:08:04.389348 4319 log.go:172] (0xc000adb6b0) (0xc00086cdc0) Stream added, broadcasting: 3\nI0517 01:08:04.390974 4319 log.go:172] (0xc000adb6b0) Reply frame received for 3\nI0517 
01:08:04.391026 4319 log.go:172] (0xc000adb6b0) (0xc000850be0) Create stream\nI0517 01:08:04.391042 4319 log.go:172] (0xc000adb6b0) (0xc000850be0) Stream added, broadcasting: 5\nI0517 01:08:04.391967 4319 log.go:172] (0xc000adb6b0) Reply frame received for 5\nI0517 01:08:04.482561 4319 log.go:172] (0xc000adb6b0) Data frame received for 5\nI0517 01:08:04.482606 4319 log.go:172] (0xc000adb6b0) Data frame received for 3\nI0517 01:08:04.482634 4319 log.go:172] (0xc00086cdc0) (3) Data frame handling\nI0517 01:08:04.482662 4319 log.go:172] (0xc000850be0) (5) Data frame handling\nI0517 01:08:04.482683 4319 log.go:172] (0xc000850be0) (5) Data frame sent\nI0517 01:08:04.482701 4319 log.go:172] (0xc000adb6b0) Data frame received for 5\nI0517 01:08:04.482717 4319 log.go:172] (0xc000850be0) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.197.230 80\nConnection to 10.107.197.230 80 port [tcp/http] succeeded!\nI0517 01:08:04.485422 4319 log.go:172] (0xc000adb6b0) Data frame received for 1\nI0517 01:08:04.485447 4319 log.go:172] (0xc000863e00) (1) Data frame handling\nI0517 01:08:04.485472 4319 log.go:172] (0xc000863e00) (1) Data frame sent\nI0517 01:08:04.485765 4319 log.go:172] (0xc000adb6b0) (0xc000863e00) Stream removed, broadcasting: 1\nI0517 01:08:04.485815 4319 log.go:172] (0xc000adb6b0) Go away received\nI0517 01:08:04.486087 4319 log.go:172] (0xc000adb6b0) (0xc000863e00) Stream removed, broadcasting: 1\nI0517 01:08:04.486108 4319 log.go:172] (0xc000adb6b0) (0xc00086cdc0) Stream removed, broadcasting: 3\nI0517 01:08:04.486118 4319 log.go:172] (0xc000adb6b0) (0xc000850be0) Stream removed, broadcasting: 5\n" May 17 01:08:04.491: INFO: stdout: "" May 17 01:08:04.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpodhdkhs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32550' May 17 01:08:04.697: INFO: stderr: "I0517 01:08:04.622431 4339 log.go:172] (0xc000974000) (0xc00044cc80) Create 
stream\nI0517 01:08:04.622506 4339 log.go:172] (0xc000974000) (0xc00044cc80) Stream added, broadcasting: 1\nI0517 01:08:04.624481 4339 log.go:172] (0xc000974000) Reply frame received for 1\nI0517 01:08:04.624526 4339 log.go:172] (0xc000974000) (0xc00044d400) Create stream\nI0517 01:08:04.624545 4339 log.go:172] (0xc000974000) (0xc00044d400) Stream added, broadcasting: 3\nI0517 01:08:04.625680 4339 log.go:172] (0xc000974000) Reply frame received for 3\nI0517 01:08:04.625714 4339 log.go:172] (0xc000974000) (0xc0004301e0) Create stream\nI0517 01:08:04.625727 4339 log.go:172] (0xc000974000) (0xc0004301e0) Stream added, broadcasting: 5\nI0517 01:08:04.626658 4339 log.go:172] (0xc000974000) Reply frame received for 5\nI0517 01:08:04.691028 4339 log.go:172] (0xc000974000) Data frame received for 5\nI0517 01:08:04.691059 4339 log.go:172] (0xc0004301e0) (5) Data frame handling\nI0517 01:08:04.691070 4339 log.go:172] (0xc0004301e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32550\nConnection to 172.17.0.13 32550 port [tcp/32550] succeeded!\nI0517 01:08:04.691510 4339 log.go:172] (0xc000974000) Data frame received for 3\nI0517 01:08:04.691542 4339 log.go:172] (0xc00044d400) (3) Data frame handling\nI0517 01:08:04.691567 4339 log.go:172] (0xc000974000) Data frame received for 5\nI0517 01:08:04.691577 4339 log.go:172] (0xc0004301e0) (5) Data frame handling\nI0517 01:08:04.693741 4339 log.go:172] (0xc000974000) Data frame received for 1\nI0517 01:08:04.693761 4339 log.go:172] (0xc00044cc80) (1) Data frame handling\nI0517 01:08:04.693775 4339 log.go:172] (0xc00044cc80) (1) Data frame sent\nI0517 01:08:04.693790 4339 log.go:172] (0xc000974000) (0xc00044cc80) Stream removed, broadcasting: 1\nI0517 01:08:04.693807 4339 log.go:172] (0xc000974000) Go away received\nI0517 01:08:04.694038 4339 log.go:172] (0xc000974000) (0xc00044cc80) Stream removed, broadcasting: 1\nI0517 01:08:04.694049 4339 log.go:172] (0xc000974000) (0xc00044d400) Stream removed, broadcasting: 3\nI0517 
01:08:04.694055 4339 log.go:172] (0xc000974000) (0xc0004301e0) Stream removed, broadcasting: 5\n" May 17 01:08:04.697: INFO: stdout: "" May 17 01:08:04.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpodhdkhs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32550' May 17 01:08:04.945: INFO: stderr: "I0517 01:08:04.837107 4358 log.go:172] (0xc000a171e0) (0xc000a6a280) Create stream\nI0517 01:08:04.837422 4358 log.go:172] (0xc000a171e0) (0xc000a6a280) Stream added, broadcasting: 1\nI0517 01:08:04.841626 4358 log.go:172] (0xc000a171e0) Reply frame received for 1\nI0517 01:08:04.841667 4358 log.go:172] (0xc000a171e0) (0xc000722f00) Create stream\nI0517 01:08:04.841686 4358 log.go:172] (0xc000a171e0) (0xc000722f00) Stream added, broadcasting: 3\nI0517 01:08:04.842316 4358 log.go:172] (0xc000a171e0) Reply frame received for 3\nI0517 01:08:04.842343 4358 log.go:172] (0xc000a171e0) (0xc000566280) Create stream\nI0517 01:08:04.842355 4358 log.go:172] (0xc000a171e0) (0xc000566280) Stream added, broadcasting: 5\nI0517 01:08:04.843064 4358 log.go:172] (0xc000a171e0) Reply frame received for 5\nI0517 01:08:04.939345 4358 log.go:172] (0xc000a171e0) Data frame received for 3\nI0517 01:08:04.939475 4358 log.go:172] (0xc000722f00) (3) Data frame handling\nI0517 01:08:04.939579 4358 log.go:172] (0xc000a171e0) Data frame received for 5\nI0517 01:08:04.939603 4358 log.go:172] (0xc000566280) (5) Data frame handling\nI0517 01:08:04.939616 4358 log.go:172] (0xc000566280) (5) Data frame sent\nI0517 01:08:04.939628 4358 log.go:172] (0xc000a171e0) Data frame received for 5\nI0517 01:08:04.939640 4358 log.go:172] (0xc000566280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32550\nConnection to 172.17.0.12 32550 port [tcp/32550] succeeded!\nI0517 01:08:04.940737 4358 log.go:172] (0xc000a171e0) Data frame received for 1\nI0517 01:08:04.940796 4358 log.go:172] (0xc000a6a280) (1) Data frame 
handling\nI0517 01:08:04.940826 4358 log.go:172] (0xc000a6a280) (1) Data frame sent\nI0517 01:08:04.940853 4358 log.go:172] (0xc000a171e0) (0xc000a6a280) Stream removed, broadcasting: 1\nI0517 01:08:04.940965 4358 log.go:172] (0xc000a171e0) Go away received\nI0517 01:08:04.941368 4358 log.go:172] (0xc000a171e0) (0xc000a6a280) Stream removed, broadcasting: 1\nI0517 01:08:04.941392 4358 log.go:172] (0xc000a171e0) (0xc000722f00) Stream removed, broadcasting: 3\nI0517 01:08:04.941401 4358 log.go:172] (0xc000a171e0) (0xc000566280) Stream removed, broadcasting: 5\n" May 17 01:08:04.945: INFO: stdout: "" May 17 01:08:04.945: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:08:05.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3779" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:15.137 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":274,"skipped":4552,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:08:05.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 01:08:06.069: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 17 01:08:08.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274486, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274486, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 01:08:10.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274486, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274486, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274486, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 01:08:13.259: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:08:23.450: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1161" for this suite. STEP: Destroying namespace "webhook-1161-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.553 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":275,"skipped":4556,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:08:23.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 17 01:08:28.339: INFO: Successfully updated pod 
"labelsupdate94dc5b99-1a2b-458e-9b01-0501226f0754" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:08:30.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-650" for this suite. • [SLOW TEST:6.772 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":276,"skipped":4576,"failed":0} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:08:30.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9847, will wait for the garbage collector to delete the pods May 17 01:08:36.519: INFO: Deleting Job.batch foo took: 6.666027ms May 17 01:08:36.820: INFO: Terminating Job.batch foo pods took: 300.280764ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:09:15.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9847" for this suite. • [SLOW TEST:44.948 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":277,"skipped":4577,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:09:15.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 17 01:09:15.375: INFO: Creating deployment "test-recreate-deployment" May 17 01:09:15.384: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 17 01:09:15.446: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 17 01:09:17.452: INFO: Waiting deployment "test-recreate-deployment" to complete May 17 01:09:17.455: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274555, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274555, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274555, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274555, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 01:09:19.460: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 17 01:09:19.468: INFO: Updating deployment test-recreate-deployment May 17 01:09:19.468: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 17 01:09:20.141: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-451 /apis/apps/v1/namespaces/deployment-451/deployments/test-recreate-deployment 658d6241-22ba-4b6e-8f83-e2f2de8a8c8e 5302498 2 2020-05-17 01:09:15 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-17 01:09:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-17 01:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a6a988 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-17 01:09:19 +0000 UTC,LastTransitionTime:2020-05-17 01:09:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-17 01:09:19 +0000 UTC,LastTransitionTime:2020-05-17 01:09:15 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 17 01:09:20.144: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-451 /apis/apps/v1/namespaces/deployment-451/replicasets/test-recreate-deployment-d5667d9c7 6563886b-11df-499c-90bd-ad8c0b2d5f83 5302496 1 2020-05-17 01:09:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 658d6241-22ba-4b6e-8f83-e2f2de8a8c8e 0xc003a6afa0 0xc003a6afa1}] [] [{kube-controller-manager Update apps/v1 2020-05-17 01:09:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"658d6241-22ba-4b6e-8f83-e2f2de8a8c8e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a6b018 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 17 01:09:20.144: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 17 01:09:20.145: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-451 /apis/apps/v1/namespaces/deployment-451/replicasets/test-recreate-deployment-6d65b9f6d8 0d22594a-c7a5-4dc3-8cc6-1de682a71345 5302487 2 2020-05-17 01:09:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 658d6241-22ba-4b6e-8f83-e2f2de8a8c8e 0xc003a6ae97 0xc003a6ae98}] [] [{kube-controller-manager Update apps/v1 2020-05-17 01:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"658d6241-22ba-4b6e-8f83-e2f2de8a8c8e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string
{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a6af38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 17 01:09:20.166: INFO: Pod "test-recreate-deployment-d5667d9c7-b4z5b" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-b4z5b test-recreate-deployment-d5667d9c7- deployment-451 /api/v1/namespaces/deployment-451/pods/test-recreate-deployment-d5667d9c7-b4z5b 6b389e4e-db9f-489a-924f-6c81123be1ca 5302500 0 2020-05-17 01:09:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 6563886b-11df-499c-90bd-ad8c0b2d5f83 0xc003a6b540 0xc003a6b541}] [] [{kube-controller-manager Update v1 2020-05-17 01:09:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6563886b-11df-499c-90bd-ad8c0b2d5f83\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-17 01:09:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b882p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b882p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b882p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeT
ime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 01:09:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 01:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 01:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-17 01:09:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-17 01:09:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:09:20.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-451" for this suite. 
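[Editor's note: The Recreate rollout exercised above replaces all old pods before any new pod is created, which is why the status dump shows AvailableReplicas:0 and the Available condition flipping to False mid-rollout. A minimal hand-written manifest producing the same behavior, using the image and labels from the spec dumped above (the manifest itself is a sketch, not the object the test generated), would look roughly like:]

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate   # scale the old ReplicaSet to 0 before creating the new one
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```

[With `Recreate` there is always a window with zero available replicas between rollouts, unlike the default `RollingUpdate` strategy.]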
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":278,"skipped":4581,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:09:20.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:09:26.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5248" for this suite. 
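[Editor's note: The Kubelet "should print the output to logs" test above runs a short busybox command in a pod and asserts its stdout is retrievable from the container log. The exact command is not shown in this log; the echo below is an illustrative sketch of the same idea:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo   # illustrative name, not the test's pod
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello from busybox'"]
```

[After the pod completes, `kubectl logs busybox-logs-demo` returns the command's stdout, which is what the conformance test verifies.]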
• [SLOW TEST:6.151 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:09:26.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 17 01:09:27.216: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 17 01:09:29.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274567, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274567, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274567, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274567, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 17 01:09:32.284: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:09:32.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8694" for this suite. STEP: Destroying namespace "webhook-8694-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.071 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":280,"skipped":4656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:09:32.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:09:48.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4116" for this suite. • [SLOW TEST:16.225 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":288,"completed":281,"skipped":4684,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:09:48.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:09:52.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6609" for this suite. 
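[Editor's note: The read-only busybox test above hinges on the container-level securityContext field `readOnlyRootFilesystem`. A minimal illustrative pod (the field names are standard Kubernetes API; the pod name and command are assumptions, not taken from the test):]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Attempt a write to the root filesystem; with the setting below it should fail.
    command: ["/bin/sh", "-c", "touch /file && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true
```

[The kubelet mounts the container's root filesystem read-only, so writes anywhere outside an explicitly mounted writable volume are rejected.]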
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4691,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:09:52.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-d1b47cb9-0108-4931-9f88-366765df6e6c STEP: Creating a pod to test consume secrets May 17 01:09:52.988: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5db9d549-cf57-4d26-bc0d-3df4d5cceb8c" in namespace "projected-1493" to be "Succeeded or Failed" May 17 01:09:53.054: INFO: Pod "pod-projected-secrets-5db9d549-cf57-4d26-bc0d-3df4d5cceb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 66.090309ms May 17 01:09:55.058: INFO: Pod "pod-projected-secrets-5db9d549-cf57-4d26-bc0d-3df4d5cceb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070167689s May 17 01:09:57.063: INFO: Pod "pod-projected-secrets-5db9d549-cf57-4d26-bc0d-3df4d5cceb8c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.075002646s STEP: Saw pod success May 17 01:09:57.063: INFO: Pod "pod-projected-secrets-5db9d549-cf57-4d26-bc0d-3df4d5cceb8c" satisfied condition "Succeeded or Failed" May 17 01:09:57.066: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5db9d549-cf57-4d26-bc0d-3df4d5cceb8c container projected-secret-volume-test: STEP: delete the pod May 17 01:09:57.099: INFO: Waiting for pod pod-projected-secrets-5db9d549-cf57-4d26-bc0d-3df4d5cceb8c to disappear May 17 01:09:57.112: INFO: Pod pod-projected-secrets-5db9d549-cf57-4d26-bc0d-3df4d5cceb8c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:09:57.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1493" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":283,"skipped":4713,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:09:57.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 17 01:09:57.311: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e5b7c64-b65a-49a4-b02d-62fc2ce91ef0" in namespace "downward-api-5679" to be "Succeeded or Failed" May 17 01:09:57.317: INFO: Pod "downwardapi-volume-4e5b7c64-b65a-49a4-b02d-62fc2ce91ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.334364ms May 17 01:09:59.327: INFO: Pod "downwardapi-volume-4e5b7c64-b65a-49a4-b02d-62fc2ce91ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015690192s May 17 01:10:01.332: INFO: Pod "downwardapi-volume-4e5b7c64-b65a-49a4-b02d-62fc2ce91ef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020829138s STEP: Saw pod success May 17 01:10:01.332: INFO: Pod "downwardapi-volume-4e5b7c64-b65a-49a4-b02d-62fc2ce91ef0" satisfied condition "Succeeded or Failed" May 17 01:10:01.335: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4e5b7c64-b65a-49a4-b02d-62fc2ce91ef0 container client-container: STEP: delete the pod May 17 01:10:01.379: INFO: Waiting for pod downwardapi-volume-4e5b7c64-b65a-49a4-b02d-62fc2ce91ef0 to disappear May 17 01:10:01.416: INFO: Pod downwardapi-volume-4e5b7c64-b65a-49a4-b02d-62fc2ce91ef0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:10:01.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5679" for this suite. 
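[Editor's note: The Downward API volume test above exposes a container's resource fields as files inside the pod. A hand-written sketch for surfacing the memory request (the mount path, file name, and request value are illustrative assumptions):]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Read the projected resource field back out of the volume.
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

[The `resourceFieldRef` selector is what distinguishes this from the more common `fieldRef` (labels/annotations) form of the Downward API.]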
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4746,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:10:01.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-815 May 17 01:10:03.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-815 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 17 01:10:03.906: INFO: stderr: "I0517 01:10:03.818008 4379 log.go:172] (0xc000985760) (0xc0009c80a0) Create stream\nI0517 01:10:03.818070 4379 log.go:172] (0xc000985760) (0xc0009c80a0) Stream added, broadcasting: 1\nI0517 01:10:03.823299 4379 log.go:172] (0xc000985760) Reply frame received for 1\nI0517 01:10:03.823349 4379 log.go:172] (0xc000985760) (0xc000839e00) Create stream\nI0517 01:10:03.823368 4379 log.go:172] (0xc000985760) (0xc000839e00) Stream added, broadcasting: 3\nI0517 01:10:03.824466 4379 log.go:172] (0xc000985760) 
Reply frame received for 3\nI0517 01:10:03.824524 4379 log.go:172] (0xc000985760) (0xc0006f0be0) Create stream\nI0517 01:10:03.824545 4379 log.go:172] (0xc000985760) (0xc0006f0be0) Stream added, broadcasting: 5\nI0517 01:10:03.825806 4379 log.go:172] (0xc000985760) Reply frame received for 5\nI0517 01:10:03.893657 4379 log.go:172] (0xc000985760) Data frame received for 5\nI0517 01:10:03.893686 4379 log.go:172] (0xc0006f0be0) (5) Data frame handling\nI0517 01:10:03.893707 4379 log.go:172] (0xc0006f0be0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0517 01:10:03.899002 4379 log.go:172] (0xc000985760) Data frame received for 3\nI0517 01:10:03.899029 4379 log.go:172] (0xc000839e00) (3) Data frame handling\nI0517 01:10:03.899055 4379 log.go:172] (0xc000839e00) (3) Data frame sent\nI0517 01:10:03.899524 4379 log.go:172] (0xc000985760) Data frame received for 3\nI0517 01:10:03.899552 4379 log.go:172] (0xc000839e00) (3) Data frame handling\nI0517 01:10:03.899572 4379 log.go:172] (0xc000985760) Data frame received for 5\nI0517 01:10:03.899591 4379 log.go:172] (0xc0006f0be0) (5) Data frame handling\nI0517 01:10:03.901809 4379 log.go:172] (0xc000985760) Data frame received for 1\nI0517 01:10:03.901837 4379 log.go:172] (0xc0009c80a0) (1) Data frame handling\nI0517 01:10:03.901851 4379 log.go:172] (0xc0009c80a0) (1) Data frame sent\nI0517 01:10:03.901868 4379 log.go:172] (0xc000985760) (0xc0009c80a0) Stream removed, broadcasting: 1\nI0517 01:10:03.901882 4379 log.go:172] (0xc000985760) Go away received\nI0517 01:10:03.902304 4379 log.go:172] (0xc000985760) (0xc0009c80a0) Stream removed, broadcasting: 1\nI0517 01:10:03.902325 4379 log.go:172] (0xc000985760) (0xc000839e00) Stream removed, broadcasting: 3\nI0517 01:10:03.902336 4379 log.go:172] (0xc000985760) (0xc0006f0be0) Stream removed, broadcasting: 5\n" May 17 01:10:03.906: INFO: stdout: "iptables" May 17 01:10:03.907: INFO: proxyMode: iptables May 17 01:10:03.912: INFO: Waiting for 
pod kube-proxy-mode-detector to disappear May 17 01:10:03.941: INFO: Pod kube-proxy-mode-detector still exists May 17 01:10:05.941: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 17 01:10:05.997: INFO: Pod kube-proxy-mode-detector still exists May 17 01:10:07.941: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 17 01:10:07.955: INFO: Pod kube-proxy-mode-detector still exists May 17 01:10:09.941: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 17 01:10:09.944: INFO: Pod kube-proxy-mode-detector still exists May 17 01:10:11.941: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 17 01:10:11.967: INFO: Pod kube-proxy-mode-detector still exists May 17 01:10:13.941: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 17 01:10:13.945: INFO: Pod kube-proxy-mode-detector still exists May 17 01:10:15.941: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 17 01:10:15.955: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-815 STEP: creating replication controller affinity-clusterip-timeout in namespace services-815 I0517 01:10:16.028566 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-815, replica count: 3 I0517 01:10:19.078978 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 01:10:22.079230 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 01:10:22.085: INFO: Creating new exec pod May 17 01:10:27.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-815 execpod-affinitytpwkm -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 
17 01:10:27.358: INFO: stderr: "I0517 01:10:27.248648 4399 log.go:172] (0xc0000e8370) (0xc000507360) Create stream\nI0517 01:10:27.248701 4399 log.go:172] (0xc0000e8370) (0xc000507360) Stream added, broadcasting: 1\nI0517 01:10:27.250384 4399 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0517 01:10:27.250442 4399 log.go:172] (0xc0000e8370) (0xc000441720) Create stream\nI0517 01:10:27.250468 4399 log.go:172] (0xc0000e8370) (0xc000441720) Stream added, broadcasting: 3\nI0517 01:10:27.251435 4399 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0517 01:10:27.251495 4399 log.go:172] (0xc0000e8370) (0xc0000f3040) Create stream\nI0517 01:10:27.251519 4399 log.go:172] (0xc0000e8370) (0xc0000f3040) Stream added, broadcasting: 5\nI0517 01:10:27.252390 4399 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0517 01:10:27.351438 4399 log.go:172] (0xc0000e8370) Data frame received for 5\nI0517 01:10:27.351482 4399 log.go:172] (0xc0000f3040) (5) Data frame handling\nI0517 01:10:27.351503 4399 log.go:172] (0xc0000f3040) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0517 01:10:27.352062 4399 log.go:172] (0xc0000e8370) Data frame received for 5\nI0517 01:10:27.352175 4399 log.go:172] (0xc0000f3040) (5) Data frame handling\nI0517 01:10:27.352216 4399 log.go:172] (0xc0000f3040) (5) Data frame sent\nI0517 01:10:27.352235 4399 log.go:172] (0xc0000e8370) Data frame received for 3\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0517 01:10:27.352265 4399 log.go:172] (0xc000441720) (3) Data frame handling\nI0517 01:10:27.352283 4399 log.go:172] (0xc0000e8370) Data frame received for 5\nI0517 01:10:27.352292 4399 log.go:172] (0xc0000f3040) (5) Data frame handling\nI0517 01:10:27.354157 4399 log.go:172] (0xc0000e8370) Data frame received for 1\nI0517 01:10:27.354173 4399 log.go:172] (0xc000507360) (1) Data frame handling\nI0517 01:10:27.354185 4399 log.go:172] (0xc000507360) (1) Data frame sent\nI0517 01:10:27.354199 
4399 log.go:172] (0xc0000e8370) (0xc000507360) Stream removed, broadcasting: 1\nI0517 01:10:27.354214 4399 log.go:172] (0xc0000e8370) Go away received\nI0517 01:10:27.354527 4399 log.go:172] (0xc0000e8370) (0xc000507360) Stream removed, broadcasting: 1\nI0517 01:10:27.354549 4399 log.go:172] (0xc0000e8370) (0xc000441720) Stream removed, broadcasting: 3\nI0517 01:10:27.354557 4399 log.go:172] (0xc0000e8370) (0xc0000f3040) Stream removed, broadcasting: 5\n" May 17 01:10:27.358: INFO: stdout: "" May 17 01:10:27.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-815 execpod-affinitytpwkm -- /bin/sh -x -c nc -zv -t -w 2 10.104.188.136 80' May 17 01:10:27.546: INFO: stderr: "I0517 01:10:27.482991 4419 log.go:172] (0xc00097fad0) (0xc0006e0640) Create stream\nI0517 01:10:27.483051 4419 log.go:172] (0xc00097fad0) (0xc0006e0640) Stream added, broadcasting: 1\nI0517 01:10:27.484824 4419 log.go:172] (0xc00097fad0) Reply frame received for 1\nI0517 01:10:27.484863 4419 log.go:172] (0xc00097fad0) (0xc0006f6f00) Create stream\nI0517 01:10:27.484876 4419 log.go:172] (0xc00097fad0) (0xc0006f6f00) Stream added, broadcasting: 3\nI0517 01:10:27.486272 4419 log.go:172] (0xc00097fad0) Reply frame received for 3\nI0517 01:10:27.486319 4419 log.go:172] (0xc00097fad0) (0xc0005c21e0) Create stream\nI0517 01:10:27.486331 4419 log.go:172] (0xc00097fad0) (0xc0005c21e0) Stream added, broadcasting: 5\nI0517 01:10:27.486990 4419 log.go:172] (0xc00097fad0) Reply frame received for 5\nI0517 01:10:27.540207 4419 log.go:172] (0xc00097fad0) Data frame received for 5\nI0517 01:10:27.540233 4419 log.go:172] (0xc0005c21e0) (5) Data frame handling\nI0517 01:10:27.540248 4419 log.go:172] (0xc0005c21e0) (5) Data frame sent\nI0517 01:10:27.540257 4419 log.go:172] (0xc00097fad0) Data frame received for 5\nI0517 01:10:27.540265 4419 log.go:172] (0xc0005c21e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.188.136 
80\nConnection to 10.104.188.136 80 port [tcp/http] succeeded!\nI0517 01:10:27.540718 4419 log.go:172] (0xc00097fad0) Data frame received for 3\nI0517 01:10:27.540750 4419 log.go:172] (0xc0006f6f00) (3) Data frame handling\nI0517 01:10:27.541806 4419 log.go:172] (0xc00097fad0) Data frame received for 1\nI0517 01:10:27.541835 4419 log.go:172] (0xc0006e0640) (1) Data frame handling\nI0517 01:10:27.541850 4419 log.go:172] (0xc0006e0640) (1) Data frame sent\nI0517 01:10:27.541918 4419 log.go:172] (0xc00097fad0) (0xc0006e0640) Stream removed, broadcasting: 1\nI0517 01:10:27.542070 4419 log.go:172] (0xc00097fad0) Go away received\nI0517 01:10:27.542217 4419 log.go:172] (0xc00097fad0) (0xc0006e0640) Stream removed, broadcasting: 1\nI0517 01:10:27.542231 4419 log.go:172] (0xc00097fad0) (0xc0006f6f00) Stream removed, broadcasting: 3\nI0517 01:10:27.542239 4419 log.go:172] (0xc00097fad0) (0xc0005c21e0) Stream removed, broadcasting: 5\n" May 17 01:10:27.546: INFO: stdout: "" May 17 01:10:27.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-815 execpod-affinitytpwkm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.188.136:80/ ; done' May 17 01:10:27.861: INFO: stderr: "I0517 01:10:27.681321 4439 log.go:172] (0xc0006c0210) (0xc000474dc0) Create stream\nI0517 01:10:27.681388 4439 log.go:172] (0xc0006c0210) (0xc000474dc0) Stream added, broadcasting: 1\nI0517 01:10:27.683971 4439 log.go:172] (0xc0006c0210) Reply frame received for 1\nI0517 01:10:27.684037 4439 log.go:172] (0xc0006c0210) (0xc0000dd180) Create stream\nI0517 01:10:27.684061 4439 log.go:172] (0xc0006c0210) (0xc0000dd180) Stream added, broadcasting: 3\nI0517 01:10:27.685341 4439 log.go:172] (0xc0006c0210) Reply frame received for 3\nI0517 01:10:27.685421 4439 log.go:172] (0xc0006c0210) (0xc0003ac640) Create stream\nI0517 01:10:27.685451 4439 log.go:172] (0xc0006c0210) (0xc0003ac640) Stream 
added, broadcasting: 5\nI0517 01:10:27.686706 4439 log.go:172] (0xc0006c0210) Reply frame received for 5\nI0517 01:10:27.761594 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.761627 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.761650 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.761845 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.761879 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.761912 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.766358 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.766381 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.766398 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.766967 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.767025 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.767044 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.767062 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.767083 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.767095 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.775254 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.775272 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.775280 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.775675 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.775701 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.775725 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.775815 4439 
log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.775828 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.775840 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.782091 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.782111 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.782130 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.782693 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.782721 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.782730 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\nI0517 01:10:27.782736 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.782742 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.782759 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\nI0517 01:10:27.782771 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.782777 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.782784 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.786840 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.786855 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.786864 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.787226 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.787243 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.787255 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\nI0517 01:10:27.787269 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.787279 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.787289 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.787298 4439 log.go:172] (0xc0000dd180) (3) Data frame 
handling\nI0517 01:10:27.787306 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.787368 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\nI0517 01:10:27.794037 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.794073 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.794097 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.794737 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.794758 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.794783 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.794796 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.794804 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.794831 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.798631 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.798667 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.798701 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.799185 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.799200 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.799208 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.799336 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.799359 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.799387 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.805729 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.805751 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.805764 4439 log.go:172] (0xc0000dd180) (3) Data 
frame sent\nI0517 01:10:27.806300 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.806318 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.806328 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.806349 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.806367 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.806396 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.809907 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.809926 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.809951 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.810737 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.810759 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.810772 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.810801 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.810825 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.810848 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\nI0517 01:10:27.810866 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.810881 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.810915 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\nI0517 01:10:27.815991 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.816009 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.816023 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.816617 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.816645 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.816656 4439 log.go:172] 
(0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.816670 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.816678 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.816687 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.822409 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.822454 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.822477 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.822505 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.822523 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.822534 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.822599 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.822623 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.822654 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.826548 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.826564 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.826579 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.826972 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.826997 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.827009 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.827026 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.827040 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.827057 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.832370 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.832407 4439 
log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.832450 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.832791 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.832813 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.832827 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.832851 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.832863 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.832879 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\nI0517 01:10:27.832891 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.832903 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.832922 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\nI0517 01:10:27.837799 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.837824 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.837844 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.839102 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.839125 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.839141 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0517 01:10:27.839307 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.839337 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.839349 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.839366 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.839378 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.839388 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n http://10.104.188.136:80/\nI0517 01:10:27.843834 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 
01:10:27.843859 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.843889 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.844355 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.844376 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.844416 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.844433 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.844445 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.844462 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.847710 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.847733 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.847752 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.848411 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.848424 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.848435 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.848454 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.848479 4439 log.go:172] (0xc0003ac640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:27.848494 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.853918 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.853941 4439 log.go:172] (0xc0000dd180) (3) Data frame handling\nI0517 01:10:27.853955 4439 log.go:172] (0xc0000dd180) (3) Data frame sent\nI0517 01:10:27.854534 4439 log.go:172] (0xc0006c0210) Data frame received for 5\nI0517 01:10:27.854549 4439 log.go:172] (0xc0003ac640) (5) Data frame handling\nI0517 01:10:27.854804 4439 log.go:172] (0xc0006c0210) Data frame received for 3\nI0517 01:10:27.854824 4439 log.go:172] (0xc0000dd180) (3) Data 
frame handling\nI0517 01:10:27.856494 4439 log.go:172] (0xc0006c0210) Data frame received for 1\nI0517 01:10:27.856514 4439 log.go:172] (0xc000474dc0) (1) Data frame handling\nI0517 01:10:27.856529 4439 log.go:172] (0xc000474dc0) (1) Data frame sent\nI0517 01:10:27.856713 4439 log.go:172] (0xc0006c0210) (0xc000474dc0) Stream removed, broadcasting: 1\nI0517 01:10:27.856826 4439 log.go:172] (0xc0006c0210) Go away received\nI0517 01:10:27.856964 4439 log.go:172] (0xc0006c0210) (0xc000474dc0) Stream removed, broadcasting: 1\nI0517 01:10:27.856985 4439 log.go:172] (0xc0006c0210) (0xc0000dd180) Stream removed, broadcasting: 3\nI0517 01:10:27.856992 4439 log.go:172] (0xc0006c0210) (0xc0003ac640) Stream removed, broadcasting: 5\n" May 17 01:10:27.863: INFO: stdout: "\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml\naffinity-clusterip-timeout-ch2ml" May 17 01:10:27.863: INFO: Received response from host: May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 
17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Received response from host: affinity-clusterip-timeout-ch2ml May 17 01:10:27.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-815 execpod-affinitytpwkm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.188.136:80/' May 17 01:10:28.066: INFO: stderr: "I0517 01:10:27.992350 4460 log.go:172] (0xc000951080) (0xc000628aa0) Create stream\nI0517 01:10:27.992402 4460 log.go:172] (0xc000951080) (0xc000628aa0) Stream added, broadcasting: 1\nI0517 01:10:27.996691 4460 log.go:172] (0xc000951080) Reply frame received for 1\nI0517 01:10:27.996751 4460 log.go:172] (0xc000951080) (0xc0005c8d20) Create stream\nI0517 01:10:27.996764 4460 log.go:172] (0xc000951080) (0xc0005c8d20) Stream added, broadcasting: 3\nI0517 01:10:27.998125 4460 log.go:172] (0xc000951080) Reply frame received for 3\nI0517 01:10:27.998180 4460 log.go:172] (0xc000951080) (0xc00055e5a0) Create stream\nI0517 01:10:27.998194 4460 log.go:172] (0xc000951080) (0xc00055e5a0) Stream added, broadcasting: 5\nI0517 01:10:27.999097 4460 log.go:172] (0xc000951080) Reply frame received for 5\nI0517 01:10:28.053854 4460 log.go:172] (0xc000951080) Data frame received for 5\nI0517 01:10:28.053894 4460 
log.go:172] (0xc00055e5a0) (5) Data frame handling\nI0517 01:10:28.053917 4460 log.go:172] (0xc00055e5a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:28.059772 4460 log.go:172] (0xc000951080) Data frame received for 3\nI0517 01:10:28.059814 4460 log.go:172] (0xc0005c8d20) (3) Data frame handling\nI0517 01:10:28.059848 4460 log.go:172] (0xc0005c8d20) (3) Data frame sent\nI0517 01:10:28.060179 4460 log.go:172] (0xc000951080) Data frame received for 3\nI0517 01:10:28.060231 4460 log.go:172] (0xc0005c8d20) (3) Data frame handling\nI0517 01:10:28.060261 4460 log.go:172] (0xc000951080) Data frame received for 5\nI0517 01:10:28.060285 4460 log.go:172] (0xc00055e5a0) (5) Data frame handling\nI0517 01:10:28.062154 4460 log.go:172] (0xc000951080) Data frame received for 1\nI0517 01:10:28.062231 4460 log.go:172] (0xc000628aa0) (1) Data frame handling\nI0517 01:10:28.062311 4460 log.go:172] (0xc000628aa0) (1) Data frame sent\nI0517 01:10:28.062337 4460 log.go:172] (0xc000951080) (0xc000628aa0) Stream removed, broadcasting: 1\nI0517 01:10:28.062353 4460 log.go:172] (0xc000951080) Go away received\nI0517 01:10:28.062719 4460 log.go:172] (0xc000951080) (0xc000628aa0) Stream removed, broadcasting: 1\nI0517 01:10:28.062735 4460 log.go:172] (0xc000951080) (0xc0005c8d20) Stream removed, broadcasting: 3\nI0517 01:10:28.062741 4460 log.go:172] (0xc000951080) (0xc00055e5a0) Stream removed, broadcasting: 5\n" May 17 01:10:28.066: INFO: stdout: "affinity-clusterip-timeout-ch2ml" May 17 01:10:43.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-815 execpod-affinitytpwkm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.188.136:80/' May 17 01:10:43.323: INFO: stderr: "I0517 01:10:43.201641 4480 log.go:172] (0xc000714fd0) (0xc000a82780) Create stream\nI0517 01:10:43.201694 4480 log.go:172] (0xc000714fd0) (0xc000a82780) Stream added, broadcasting: 
1\nI0517 01:10:43.206215 4480 log.go:172] (0xc000714fd0) Reply frame received for 1\nI0517 01:10:43.206298 4480 log.go:172] (0xc000714fd0) (0xc000a82820) Create stream\nI0517 01:10:43.206323 4480 log.go:172] (0xc000714fd0) (0xc000a82820) Stream added, broadcasting: 3\nI0517 01:10:43.207189 4480 log.go:172] (0xc000714fd0) Reply frame received for 3\nI0517 01:10:43.207230 4480 log.go:172] (0xc000714fd0) (0xc000a828c0) Create stream\nI0517 01:10:43.207244 4480 log.go:172] (0xc000714fd0) (0xc000a828c0) Stream added, broadcasting: 5\nI0517 01:10:43.208240 4480 log.go:172] (0xc000714fd0) Reply frame received for 5\nI0517 01:10:43.312997 4480 log.go:172] (0xc000714fd0) Data frame received for 5\nI0517 01:10:43.313031 4480 log.go:172] (0xc000a828c0) (5) Data frame handling\nI0517 01:10:43.313055 4480 log.go:172] (0xc000a828c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:43.316880 4480 log.go:172] (0xc000714fd0) Data frame received for 3\nI0517 01:10:43.316908 4480 log.go:172] (0xc000a82820) (3) Data frame handling\nI0517 01:10:43.316933 4480 log.go:172] (0xc000a82820) (3) Data frame sent\nI0517 01:10:43.317425 4480 log.go:172] (0xc000714fd0) Data frame received for 3\nI0517 01:10:43.317446 4480 log.go:172] (0xc000a82820) (3) Data frame handling\nI0517 01:10:43.317628 4480 log.go:172] (0xc000714fd0) Data frame received for 5\nI0517 01:10:43.317646 4480 log.go:172] (0xc000a828c0) (5) Data frame handling\nI0517 01:10:43.318954 4480 log.go:172] (0xc000714fd0) Data frame received for 1\nI0517 01:10:43.318968 4480 log.go:172] (0xc000a82780) (1) Data frame handling\nI0517 01:10:43.318976 4480 log.go:172] (0xc000a82780) (1) Data frame sent\nI0517 01:10:43.319062 4480 log.go:172] (0xc000714fd0) (0xc000a82780) Stream removed, broadcasting: 1\nI0517 01:10:43.319117 4480 log.go:172] (0xc000714fd0) Go away received\nI0517 01:10:43.319393 4480 log.go:172] (0xc000714fd0) (0xc000a82780) Stream removed, broadcasting: 1\nI0517 01:10:43.319409 
4480 log.go:172] (0xc000714fd0) (0xc000a82820) Stream removed, broadcasting: 3\nI0517 01:10:43.319417 4480 log.go:172] (0xc000714fd0) (0xc000a828c0) Stream removed, broadcasting: 5\n" May 17 01:10:43.323: INFO: stdout: "affinity-clusterip-timeout-ch2ml" May 17 01:10:58.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-815 execpod-affinitytpwkm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.188.136:80/' May 17 01:10:58.566: INFO: stderr: "I0517 01:10:58.483744 4500 log.go:172] (0xc000973550) (0xc000822e60) Create stream\nI0517 01:10:58.483800 4500 log.go:172] (0xc000973550) (0xc000822e60) Stream added, broadcasting: 1\nI0517 01:10:58.488645 4500 log.go:172] (0xc000973550) Reply frame received for 1\nI0517 01:10:58.488693 4500 log.go:172] (0xc000973550) (0xc0007414a0) Create stream\nI0517 01:10:58.488705 4500 log.go:172] (0xc000973550) (0xc0007414a0) Stream added, broadcasting: 3\nI0517 01:10:58.490509 4500 log.go:172] (0xc000973550) Reply frame received for 3\nI0517 01:10:58.490559 4500 log.go:172] (0xc000973550) (0xc00072ea00) Create stream\nI0517 01:10:58.490574 4500 log.go:172] (0xc000973550) (0xc00072ea00) Stream added, broadcasting: 5\nI0517 01:10:58.491694 4500 log.go:172] (0xc000973550) Reply frame received for 5\nI0517 01:10:58.556711 4500 log.go:172] (0xc000973550) Data frame received for 5\nI0517 01:10:58.556738 4500 log.go:172] (0xc00072ea00) (5) Data frame handling\nI0517 01:10:58.556752 4500 log.go:172] (0xc00072ea00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.188.136:80/\nI0517 01:10:58.559149 4500 log.go:172] (0xc000973550) Data frame received for 3\nI0517 01:10:58.559176 4500 log.go:172] (0xc0007414a0) (3) Data frame handling\nI0517 01:10:58.559206 4500 log.go:172] (0xc0007414a0) (3) Data frame sent\nI0517 01:10:58.559692 4500 log.go:172] (0xc000973550) Data frame received for 5\nI0517 01:10:58.559716 4500 log.go:172] 
(0xc00072ea00) (5) Data frame handling\nI0517 01:10:58.559751 4500 log.go:172] (0xc000973550) Data frame received for 3\nI0517 01:10:58.559778 4500 log.go:172] (0xc0007414a0) (3) Data frame handling\nI0517 01:10:58.561585 4500 log.go:172] (0xc000973550) Data frame received for 1\nI0517 01:10:58.561628 4500 log.go:172] (0xc000822e60) (1) Data frame handling\nI0517 01:10:58.561674 4500 log.go:172] (0xc000822e60) (1) Data frame sent\nI0517 01:10:58.561708 4500 log.go:172] (0xc000973550) (0xc000822e60) Stream removed, broadcasting: 1\nI0517 01:10:58.561737 4500 log.go:172] (0xc000973550) Go away received\nI0517 01:10:58.562072 4500 log.go:172] (0xc000973550) (0xc000822e60) Stream removed, broadcasting: 1\nI0517 01:10:58.562096 4500 log.go:172] (0xc000973550) (0xc0007414a0) Stream removed, broadcasting: 3\nI0517 01:10:58.562105 4500 log.go:172] (0xc000973550) (0xc00072ea00) Stream removed, broadcasting: 5\n" May 17 01:10:58.566: INFO: stdout: "affinity-clusterip-timeout-h5497" May 17 01:10:58.566: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-815, will wait for the garbage collector to delete the pods May 17 01:10:58.706: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 44.584036ms May 17 01:10:59.207: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.2884ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:11:14.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-815" for this suite. 
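The session-affinity check above fires a burst of `curl` requests at the service's ClusterIP and compares the backend hostnames echoed back: with ClientIP affinity in effect, every request in the burst lands on the same pod (here `affinity-clusterip-timeout-ch2ml`), and once the affinity timeout has elapsed a different backend (`affinity-clusterip-timeout-h5497`) may answer. A minimal sketch of that comparison, assuming the echoed hostnames have already been collected as strings (an illustrative helper, not the e2e framework's own code):

```python
def all_same_backend(responses):
    """Return True if every non-empty response names the same backend pod.

    `responses` holds the hostnames echoed by the service backends, one per
    curl request; blank entries (empty or failed replies) are ignored.
    """
    hosts = [r.strip() for r in responses if r.strip()]
    return len(set(hosts)) == 1 if hosts else False

# The log's 16-request burst: a leading blank line, then the same pod each time.
burst = [""] + ["affinity-clusterip-timeout-ch2ml"] * 16
assert all_same_backend(burst)

# After the affinity timeout elapses, a different pod may answer.
assert not all_same_backend(["affinity-clusterip-timeout-ch2ml",
                             "affinity-clusterip-timeout-h5497"])
```

The same idea explains the 15-second sleeps between the later single `curl` probes in the log: they let the configured affinity timeout expire so the test can observe the sticky mapping being dropped.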
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:73.575 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":285,"skipped":4762,"failed":0} S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 17 01:11:14.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-5943d0cf-8cae-4d6a-b81f-43ba499a3e24 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 17 01:11:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4572" for this suite. 
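The ConfigMap test above stores both text and binary data and waits for each to be reflected faithfully in the mounted volume. In the Kubernetes API, binary payloads go in the ConfigMap's `binaryData` field as base64-encoded strings, while `data` holds plain UTF-8 text. A rough sketch of how such an object is assembled and round-tripped (the field names are the real API's; the payload bytes and object name are made up for illustration):

```python
import base64

# Hypothetical binary payload; non-UTF-8 bytes force the binaryData path,
# since the plain `data` field only accepts valid UTF-8 strings.
payload = bytes([0xFF, 0xFE, 0x00, 0x41])

configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test-upd-example"},
    "data": {"text-file": "hello"},          # plain UTF-8 text
    "binaryData": {                          # base64-encoded in the manifest
        "bin-file": base64.b64encode(payload).decode("ascii"),
    },
}

# When the kubelet projects the ConfigMap into a volume, each key becomes a
# file whose contents are the decoded bytes, so the round trip is lossless.
decoded = base64.b64decode(configmap["binaryData"]["bin-file"])
assert decoded == payload
```

A key may appear in `data` or `binaryData` but not both, which is why the test verifies the text file and the binary file as two separate STEPs.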
• [SLOW TEST:6.134 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":286,"skipped":4763,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 01:11:21.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-fb5d5aa8-4a67-469b-b6ce-eb6487968c17 in namespace container-probe-4935
May 17 01:11:25.343: INFO: Started pod liveness-fb5d5aa8-4a67-469b-b6ce-eb6487968c17 in namespace container-probe-4935
STEP: checking the pod's current state and verifying that restartCount is present
May 17 01:11:25.346: INFO: Initial restart count of pod liveness-fb5d5aa8-4a67-469b-b6ce-eb6487968c17 is 0
May 17 01:11:37.373: INFO: Restart count of pod container-probe-4935/liveness-fb5d5aa8-4a67-469b-b6ce-eb6487968c17 is now 1 (12.026459014s elapsed)
May 17 01:11:57.416: INFO: Restart count of pod container-probe-4935/liveness-fb5d5aa8-4a67-469b-b6ce-eb6487968c17 is now 2 (32.069265668s elapsed)
May 17 01:12:17.465: INFO: Restart count of pod container-probe-4935/liveness-fb5d5aa8-4a67-469b-b6ce-eb6487968c17 is now 3 (52.119071809s elapsed)
May 17 01:12:37.506: INFO: Restart count of pod container-probe-4935/liveness-fb5d5aa8-4a67-469b-b6ce-eb6487968c17 is now 4 (1m12.15939225s elapsed)
May 17 01:13:37.634: INFO: Restart count of pod container-probe-4935/liveness-fb5d5aa8-4a67-469b-b6ce-eb6487968c17 is now 5 (2m12.287897154s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 01:13:37.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4935" for this suite.
• [SLOW TEST:136.569 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":287,"skipped":4804,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 17 01:13:37.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 17 01:13:38.451: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 17 01:13:40.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274818, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274818, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274818, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274818, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 17 01:13:42.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274818, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274818, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274818, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725274818, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 17 01:13:45.551: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 17 01:13:45.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3850" for this suite.
STEP: Destroying namespace "webhook-3850-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.980 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":288,"skipped":4807,"failed":0}
May 17 01:13:45.682: INFO: Running AfterSuite actions on all nodes
May 17 01:13:45.682: INFO: Running AfterSuite actions on node 1
May 17 01:13:45.682: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}

Ran 288 of 5095 Specs in 5733.301 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS