I0511 18:41:33.175224 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0511 18:41:33.175436 7 e2e.go:129] Starting e2e run "0975a546-d021-477c-a431-9f79f69be5de" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589222492 - Will randomize all specs
Will run 288 of 5095 specs

May 11 18:41:33.231: INFO: >>> kubeConfig: /root/.kube/config
May 11 18:41:33.236: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 11 18:41:33.504: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 11 18:41:33.741: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 11 18:41:33.741: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 11 18:41:33.741: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 11 18:41:33.782: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 11 18:41:33.782: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 11 18:41:33.782: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 11 18:41:33.783: INFO: kube-apiserver version: v1.18.2
May 11 18:41:33.783: INFO: >>> kubeConfig: /root/.kube/config
May 11 18:41:33.808: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:41:33.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
May 11 18:41:34.930: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 18:41:37.915: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 18:41:40.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819299, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 18:41:42.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819299, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 18:41:44.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819299, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 18:41:46.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819299, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 18:41:48.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819299, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819297, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 18:41:51.713: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:41:54.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7605" for this suite.
STEP: Destroying namespace "webhook-7605-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:21.612 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":1,"skipped":4,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:41:55.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 11 18:42:02.937: INFO: Successfully updated pod "labelsupdatec544df20-35a6-4a2b-9f37-44d0f7f61b47"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:42:05.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7572" for this suite.
• [SLOW TEST:9.708 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":9,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:42:05.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 11 18:42:06.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9" in namespace "projected-9975" to be "Succeeded or Failed"
May 11 18:42:06.354: INFO: Pod "downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9": Phase="Pending", Reason="", readiness=false. Elapsed: 314.721413ms
May 11 18:42:08.792: INFO: Pod "downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752399409s
May 11 18:42:10.828: INFO: Pod "downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.788224954s
May 11 18:42:13.136: INFO: Pod "downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.096146318s
May 11 18:42:15.534: INFO: Pod "downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.49487797s
STEP: Saw pod success
May 11 18:42:15.534: INFO: Pod "downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9" satisfied condition "Succeeded or Failed"
May 11 18:42:15.537: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9 container client-container:
STEP: delete the pod
May 11 18:42:15.846: INFO: Waiting for pod downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9 to disappear
May 11 18:42:16.102: INFO: Pod downwardapi-volume-bc0e3e6a-423d-4aeb-a8f5-cab55277ebc9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:42:16.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9975" for this suite.
• [SLOW TEST:11.111 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":3,"skipped":16,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:42:16.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 11 18:42:25.649: INFO: Successfully updated pod "labelsupdate6a01a0ca-74d5-4824-b6c2-911f5d87acee"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:42:26.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-445" for this suite.
• [SLOW TEST:10.725 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":46,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:42:26.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:42:31.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-791" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":5,"skipped":65,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:42:31.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 11 18:42:31.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7067'
May 11 18:42:44.817: INFO: stderr: ""
May 11 18:42:44.817: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528
May 11 18:42:45.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7067'
May 11 18:42:51.106: INFO: stderr: ""
May 11 18:42:51.106: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:42:51.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7067" for this suite.
• [SLOW TEST:19.888 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":6,"skipped":67,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:42:51.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:42:58.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5340" for this suite.
• [SLOW TEST:7.591 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":7,"skipped":101,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:42:58.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-50832507-b73d-4c09-90f3-1ab78e227781
STEP: Creating a pod to test consume configMaps
May 11 18:43:01.326: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c" in namespace "projected-771" to be "Succeeded or Failed"
May 11 18:43:01.517: INFO: Pod "pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c": Phase="Pending", Reason="", readiness=false. Elapsed: 191.355886ms
May 11 18:43:03.598: INFO: Pod "pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271587317s
May 11 18:43:05.601: INFO: Pod "pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27550288s
May 11 18:43:07.732: INFO: Pod "pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406517971s
May 11 18:43:09.782: INFO: Pod "pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.455723824s
STEP: Saw pod success
May 11 18:43:09.782: INFO: Pod "pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c" satisfied condition "Succeeded or Failed"
May 11 18:43:09.785: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c container projected-configmap-volume-test:
STEP: delete the pod
May 11 18:43:10.010: INFO: Waiting for pod pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c to disappear
May 11 18:43:10.302: INFO: Pod pod-projected-configmaps-e9adc2f2-676a-4977-90ed-bbf248e9676c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:43:10.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-771" for this suite.
• [SLOW TEST:12.039 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":122,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:43:10.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 11 18:43:11.337: INFO: Pod name pod-release: Found 0 pods out of 1
May 11 18:43:16.720: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:43:17.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8224" for this suite.
• [SLOW TEST:9.070 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":9,"skipped":143,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:43:19.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 11 18:43:49.898: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 18:43:50.040: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 18:43:52.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 18:43:52.237: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 18:43:54.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 18:43:54.044: INFO: Pod pod-with-prestop-exec-hook still exists
May 11 18:43:56.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 11 18:43:56.044: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:43:56.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3409" for this suite.
• [SLOW TEST:36.232 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":154,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:43:56.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 11 18:43:57.493: INFO: Waiting up to 5m0s for pod "downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe" in namespace "downward-api-4603" to be "Succeeded or Failed"
May 11 18:43:57.496: INFO: Pod "downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223747ms
May 11 18:43:59.560: INFO: Pod "downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066580966s
May 11 18:44:01.943: INFO: Pod "downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449336162s
May 11 18:44:03.972: INFO: Pod "downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478956249s
May 11 18:44:06.478: INFO: Pod "downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe": Phase="Running", Reason="", readiness=true. Elapsed: 8.98464847s
May 11 18:44:08.530: INFO: Pod "downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.036573028s
STEP: Saw pod success
May 11 18:44:08.530: INFO: Pod "downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe" satisfied condition "Succeeded or Failed"
May 11 18:44:08.555: INFO: Trying to get logs from node latest-worker2 pod downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe container dapi-container:
STEP: delete the pod
May 11 18:44:08.883: INFO: Waiting for pod downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe to disappear
May 11 18:44:08.944: INFO: Pod downward-api-981c222c-ec1b-42fc-9266-5b0e4adf6bbe no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:44:08.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4603" for this suite.
• [SLOW TEST:12.952 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":11,"skipped":165,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:44:09.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 18:44:11.051: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 18:44:14.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 18:44:16.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 18:44:18.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819451, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819450, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 18:44:21.190: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:44:21.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3676" for this suite.
STEP: Destroying namespace "webhook-3676-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.742 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":12,"skipped":187,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:44:21.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-fc8f4c83-26ea-4985-9eb9-b087141b1361
STEP: Creating a pod to test consume configMaps
May 11 18:44:23.649: INFO: Waiting up to 5m0s for pod "pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e" in namespace "configmap-2224" to be "Succeeded or Failed"
May 11 18:44:24.064: INFO: Pod "pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e": Phase="Pending", Reason="", readiness=false. Elapsed: 414.493865ms
May 11 18:44:26.787: INFO: Pod "pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.137765848s
May 11 18:44:29.542: INFO: Pod "pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.892841896s
May 11 18:44:31.860: INFO: Pod "pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.210883897s
STEP: Saw pod success
May 11 18:44:31.860: INFO: Pod "pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e" satisfied condition "Succeeded or Failed"
May 11 18:44:31.863: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e container configmap-volume-test:
STEP: delete the pod
May 11 18:44:33.069: INFO: Waiting for pod pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e to disappear
May 11 18:44:33.071: INFO: Pod pod-configmaps-387f5fc1-29d9-4c07-acce-e5a40c3dac7e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:44:33.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2224" for this suite.
• [SLOW TEST:11.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":197,"failed":0}
SSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:44:33.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-1552/configmap-test-3dc85f2f-461b-4d1f-b144-7f27f4bfc3a2
STEP: Creating a pod to test consume configMaps
May 11 18:44:35.116: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df" in namespace "configmap-1552" to be "Succeeded or Failed"
May 11 18:44:35.120: INFO: Pod "pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.235538ms
May 11 18:44:37.308: INFO: Pod "pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191115379s
May 11 18:44:39.757: INFO: Pod "pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.641038368s
May 11 18:44:42.017: INFO: Pod "pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.900951032s
May 11 18:44:44.251: INFO: Pod "pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.135073149s
STEP: Saw pod success
May 11 18:44:44.252: INFO: Pod "pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df" satisfied condition "Succeeded or Failed"
May 11 18:44:44.458: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df container env-test:
STEP: delete the pod
May 11 18:44:44.936: INFO: Waiting for pod pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df to disappear
May 11 18:44:45.068: INFO: Pod pod-configmaps-d8d943fc-ab55-4590-862a-dbb89cbc64df no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:44:45.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1552" for this suite.
• [SLOW TEST:12.315 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":204,"failed":0}
SSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:44:45.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-4958
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4958 to expose endpoints map[]
May 11 18:44:47.033: INFO: successfully validated that service multi-endpoint-test in namespace services-4958 exposes endpoints map[] (1.156411774s elapsed)
STEP: Creating pod pod1 in namespace services-4958
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4958 to expose endpoints map[pod1:[100]]
May 11 18:44:53.884: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (6.653483638s elapsed, will retry)
May 11 18:44:54.924: INFO: successfully validated that service multi-endpoint-test in namespace services-4958 exposes endpoints map[pod1:[100]] (7.693421718s elapsed)
STEP: Creating pod pod2 in namespace services-4958
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4958 to expose endpoints map[pod1:[100] pod2:[101]]
May 11 18:45:01.035: INFO: Unexpected endpoints: found map[f60a275f-d112-4a76-8757-f77df6719ebb:[100]], expected map[pod1:[100] pod2:[101]] (5.59707609s elapsed, will retry)
May 11 18:45:03.379: INFO: successfully validated that service multi-endpoint-test in namespace services-4958 exposes endpoints map[pod1:[100] pod2:[101]] (7.941205165s elapsed)
STEP: Deleting pod pod1 in namespace services-4958
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4958 to expose endpoints map[pod2:[101]]
May 11 18:45:05.455: INFO: successfully validated that service multi-endpoint-test in namespace services-4958 exposes endpoints map[pod2:[101]] (2.072575226s elapsed)
STEP: Deleting pod pod2 in namespace services-4958
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4958 to expose endpoints map[]
May 11 18:45:05.990: INFO: successfully validated that service multi-endpoint-test in namespace services-4958 exposes endpoints map[] (192.726639ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:45:06.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4958" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:22.053 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":15,"skipped":207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:45:07.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 18:45:17.028: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:45:17.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1140" for this suite.
• [SLOW TEST:10.799 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":16,"skipped":232,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:45:18.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0511 18:46:01.197330 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 18:46:01.197: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:46:01.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1547" for this suite.
• [SLOW TEST:42.956 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":17,"skipped":248,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:46:01.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 11 18:46:10.440: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:46:10.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2354" for this suite.
• [SLOW TEST:10.207 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":18,"skipped":255,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:46:11.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 11 18:46:12.257: INFO: Waiting up to 5m0s for pod "pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5" in namespace "emptydir-5373" to be "Succeeded or Failed"
May 11 18:46:12.564: INFO: Pod "pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5": Phase="Pending", Reason="", readiness=false. Elapsed: 307.270772ms
May 11 18:46:15.058: INFO: Pod "pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.801347707s
May 11 18:46:17.111: INFO: Pod "pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.854314984s
May 11 18:46:19.435: INFO: Pod "pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.17813035s
May 11 18:46:21.680: INFO: Pod "pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.42319286s
STEP: Saw pod success
May 11 18:46:21.680: INFO: Pod "pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5" satisfied condition "Succeeded or Failed"
May 11 18:46:21.843: INFO: Trying to get logs from node latest-worker2 pod pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5 container test-container:
STEP: delete the pod
May 11 18:46:22.171: INFO: Waiting for pod pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5 to disappear
May 11 18:46:22.388: INFO: Pod pod-aa9c6f1e-82a2-42ac-a271-5761e4e200f5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:46:22.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5373" for this suite.
• [SLOW TEST:11.137 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":19,"skipped":288,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:46:22.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:46:23.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-795" for this suite. 
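
The Kubelet spec above reduces to: a pod whose command always fails, and therefore crash-loops, must still be deletable. A sketch under the same assumptions as the other examples here (illustrative names and image, suite kubeconfig):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods := cs.CoreV1().Pods("default")

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:    "bin-false",
			Image:   "busybox",
			Command: []string{"/bin/false"}, // exits 1 forever; the kubelet keeps restarting it
		}}},
	}
	if _, err := pods.Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The delete must succeed even though the container is crash-looping.
	if err := pods.Delete(context.TODO(), "bin-false", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
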
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":20,"skipped":307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:46:23.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7231.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7231.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 18:46:39.603: INFO: DNS probes using dns-test-1f7b480d-b149-4351-bf5d-225aa477eaa9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7231.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7231.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 18:46:55.250: INFO: File wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local from pod dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 18:46:55.253: INFO: File jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local from pod dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 11 18:46:55.253: INFO: Lookups using dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 failed for: [wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local] May 11 18:47:00.641: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local from pod dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4: Get https://172.30.12.66:32773/api/v1/namespaces/dns-7231/pods/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4/proxy/results/wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local: stream error: stream ID 915; INTERNAL_ERROR May 11 18:47:00.651: INFO: File jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local from pod dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 contains '' instead of 'bar.example.com.' May 11 18:47:00.652: INFO: Lookups using dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 failed for: [wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local] May 11 18:47:05.435: INFO: File wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local from pod dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 18:47:05.439: INFO: Lookups using dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 failed for: [wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local] May 11 18:47:10.439: INFO: File jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local from pod dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 contains '' instead of 'bar.example.com.' May 11 18:47:10.439: INFO: Lookups using dns-7231/dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 failed for: [jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local] May 11 18:47:15.368: INFO: DNS probes using dns-test-3d7c94ff-ccfa-4990-aa0a-e31bd61175d4 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7231.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7231.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7231.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7231.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 18:47:32.101: INFO: DNS probes using dns-test-6488cd88-f645-4039-9e2e-cd96163ad65a succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:47:33.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7231" for this suite. 
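
A sketch of the ExternalName service those dig probes resolve; flipping spec.externalName from foo.example.com to bar.example.com is what the "changing the externalName" step above amounts to. The service name and CNAME targets come from the log; the namespace here is illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	svcs := cs.CoreV1().Services("default")

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com", // cluster DNS answers lookups with this CNAME
		},
	}
	created, err := svcs.Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// The spec then edits the target; in-cluster lookups start resolving to
	// the new CNAME once the DNS caches catch up, hence the retry loop above.
	created.Spec.ExternalName = "bar.example.com"
	if _, err := svcs.Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
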
• [SLOW TEST:69.414 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":21,"skipped":337,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:47:33.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 18:47:33.499: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 18:47:35.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-795 create -f -' May 11 18:47:49.390: INFO: stderr: "" May 11 18:47:49.390: INFO: stdout: "e2e-test-crd-publish-openapi-3767-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 11 18:47:49.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-795 delete e2e-test-crd-publish-openapi-3767-crds test-cr' May 11 18:47:49.839: INFO: stderr: "" May 11 18:47:49.839: INFO: stdout: "e2e-test-crd-publish-openapi-3767-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 11 18:47:49.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-795 apply -f -' May 11 18:47:50.482: INFO: stderr: "" May 11 18:47:50.482: INFO: stdout: "e2e-test-crd-publish-openapi-3767-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 11 18:47:50.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-795 delete e2e-test-crd-publish-openapi-3767-crds test-cr' May 11 18:47:50.692: INFO: stderr: "" May 11 18:47:50.692: INFO: stdout: "e2e-test-crd-publish-openapi-3767-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 11 18:47:50.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3767-crds' May 11 18:47:50.995: INFO: stderr: "" May 11 18:47:50.995: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3767-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for 
Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:47:52.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-795" for this suite. • [SLOW TEST:19.898 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":22,"skipped":346,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:47:53.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 18:47:54.823: INFO: Create a RollingUpdate DaemonSet May 11 18:47:54.826: INFO: Check that daemon pods launch on every node of the cluster May 11 18:47:54.950: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:47:54.952: INFO: Number of nodes with available pods: 0 May 11 18:47:54.952: INFO: Node latest-worker is running more than one daemon pod May 11 18:47:56.219: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:47:56.483: INFO: Number of nodes with available pods: 0 May 11 18:47:56.483: INFO: Node latest-worker is running more than one 
daemon pod May 11 18:47:57.219: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:47:57.530: INFO: Number of nodes with available pods: 0 May 11 18:47:57.530: INFO: Node latest-worker is running more than one daemon pod May 11 18:47:58.477: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:47:58.870: INFO: Number of nodes with available pods: 0 May 11 18:47:58.870: INFO: Node latest-worker is running more than one daemon pod May 11 18:47:59.467: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:47:59.470: INFO: Number of nodes with available pods: 0 May 11 18:47:59.470: INFO: Node latest-worker is running more than one daemon pod May 11 18:48:00.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:48:01.238: INFO: Number of nodes with available pods: 0 May 11 18:48:01.238: INFO: Node latest-worker is running more than one daemon pod May 11 18:48:01.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:48:01.990: INFO: Number of nodes with available pods: 0 May 11 18:48:01.990: INFO: Node latest-worker is running more than one daemon pod May 11 18:48:04.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:48:04.959: INFO: Number of nodes with available pods: 2 May 11 18:48:04.959: INFO: Number of running nodes: 2, number of available pods: 2 May 11 18:48:04.959: INFO: Update the DaemonSet to trigger a rollout May 11 18:48:05.686: INFO: Updating DaemonSet daemon-set May 11 18:48:15.531: INFO: Roll back the DaemonSet before rollout is complete May 11 18:48:16.555: INFO: Updating DaemonSet daemon-set May 11 18:48:16.555: INFO: Make sure DaemonSet rollback is complete May 11 18:48:17.037: INFO: Wrong image for pod: daemon-set-xjmrb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 18:48:17.037: INFO: Pod daemon-set-xjmrb is not available May 11 18:48:17.292: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:48:18.405: INFO: Wrong image for pod: daemon-set-xjmrb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 18:48:18.405: INFO: Pod daemon-set-xjmrb is not available May 11 18:48:18.408: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:48:20.034: INFO: Wrong image for pod: daemon-set-xjmrb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
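
The sequence being verified here, in sketch form: push an image that can never pull (foo:non-existent) to start a RollingUpdate, then restore the previous image before the rollout finishes; pods that never became ready are replaced while healthy ones keep running, which is the "without unnecessary restarts" part. Updating through the typed client is one way to express it; "kubectl rollout undo daemonset/daemon-set" is another. Namespace, name, and images below come from the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func setImage(cs *kubernetes.Clientset, ns, name, image string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ds.Spec.Template.Spec.Containers[0].Image = image
	_, err = cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Trigger a rollout that can never complete...
	if err := setImage(cs, "daemonsets-4396", "daemon-set", "foo:non-existent"); err != nil {
		panic(err)
	}
	// ...then roll back by restoring the previous image.
	if err := setImage(cs, "daemonsets-4396", "daemon-set",
		"docker.io/library/httpd:2.4.38-alpine"); err != nil {
		panic(err)
	}
}
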
May 11 18:48:20.034: INFO: Pod daemon-set-xjmrb is not available May 11 18:48:20.300: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 18:48:21.535: INFO: Pod daemon-set-ncp4f is not available May 11 18:48:21.645: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4396, will wait for the garbage collector to delete the pods May 11 18:48:21.943: INFO: Deleting DaemonSet.extensions daemon-set took: 6.455495ms May 11 18:48:22.444: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.263481ms May 11 18:48:36.068: INFO: Number of nodes with available pods: 0 May 11 18:48:36.068: INFO: Number of running nodes: 0, number of available pods: 0 May 11 18:48:36.155: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4396/daemonsets","resourceVersion":"3523248"},"items":null} May 11 18:48:36.202: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4396/pods","resourceVersion":"3523249"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:48:36.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4396" for this suite. • [SLOW TEST:43.212 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":23,"skipped":355,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:48:36.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-1739 STEP: Waiting for pods to come up. 
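
A sketch of a tester pod with a preStop exec hook like the one this spec creates next: on delete, the kubelet runs the hook before terminating the container, which is what increments the server's "prestop" counter in the output below. The hook's wget target here is an assumption, not the suite's actual URL, and corev1.Handler is the 1.18-era type name (later releases call it LifecycleHandler):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:    "tester",
			Image:   "busybox",
			Command: []string{"sleep", "600"},
			Lifecycle: &corev1.Lifecycle{
				// Runs before SIGTERM when the pod is deleted.
				PreStop: &corev1.Handler{
					Exec: &corev1.ExecAction{Command: []string{
						"wget", "-qO-", "http://server.prestop-1739.svc:8080/prestop", // illustrative URL
					}},
				},
			},
		}}},
	}
	if _, err := cs.CoreV1().Pods("prestop-1739").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
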
STEP: Creating tester pod tester in namespace prestop-1739 STEP: Deleting pre-stop pod May 11 18:48:58.910: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:48:58.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1739" for this suite. • [SLOW TEST:22.801 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":24,"skipped":363,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:48:59.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 11 18:49:00.045: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:49:21.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4358" for this suite. 
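
For context, the pod shape this InitContainer spec invokes, sketched: init containers run one at a time to completion before the app container starts, and with RestartPolicy Never a failing init container fails the whole pod. All names and images below are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// init1 and init2 must both exit 0, in order, before run1 starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name: "run1", Image: "busybox", Command: []string{"true"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
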
• [SLOW TEST:22.302 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":25,"skipped":374,"failed":0} SSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:49:21.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:49:22.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9470" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":26,"skipped":382,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:49:22.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:49:25.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2739" for this suite. 
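
Why the QOS spec expects a class to be set: when every container's requests equal its limits for both cpu and memory, the pod is classed "Guaranteed" in status.qosClass. A sketch with illustrative names and quantities:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:    "main",
			Image:   "busybox",
			Command: []string{"sleep", "600"},
			// requests == limits for cpu and memory => Guaranteed
			Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
		}}},
	}
	created, err := cs.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("qosClass:", created.Status.QOSClass) // expect "Guaranteed"
}
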
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":27,"skipped":383,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:49:25.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 11 18:49:25.904: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:49:41.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9396" for this suite. • [SLOW TEST:16.167 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":28,"skipped":393,"failed":0} [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:49:41.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2922 May 11 18:49:46.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2922 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s 
--connect-timeout 1 http://localhost:10249/proxyMode' May 11 18:49:46.327: INFO: stderr: "I0511 18:49:46.259884 177 log.go:172] (0xc00003a4d0) (0xc0006c0be0) Create stream\nI0511 18:49:46.259941 177 log.go:172] (0xc00003a4d0) (0xc0006c0be0) Stream added, broadcasting: 1\nI0511 18:49:46.262694 177 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0511 18:49:46.262748 177 log.go:172] (0xc00003a4d0) (0xc0005d23c0) Create stream\nI0511 18:49:46.262764 177 log.go:172] (0xc00003a4d0) (0xc0005d23c0) Stream added, broadcasting: 3\nI0511 18:49:46.263651 177 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0511 18:49:46.263685 177 log.go:172] (0xc00003a4d0) (0xc0005d3360) Create stream\nI0511 18:49:46.263699 177 log.go:172] (0xc00003a4d0) (0xc0005d3360) Stream added, broadcasting: 5\nI0511 18:49:46.264370 177 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0511 18:49:46.313953 177 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0511 18:49:46.313995 177 log.go:172] (0xc0005d3360) (5) Data frame handling\nI0511 18:49:46.314017 177 log.go:172] (0xc0005d3360) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0511 18:49:46.320004 177 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0511 18:49:46.320046 177 log.go:172] (0xc0005d23c0) (3) Data frame handling\nI0511 18:49:46.320069 177 log.go:172] (0xc0005d23c0) (3) Data frame sent\nI0511 18:49:46.320727 177 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0511 18:49:46.320758 177 log.go:172] (0xc0005d3360) (5) Data frame handling\nI0511 18:49:46.320821 177 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0511 18:49:46.320862 177 log.go:172] (0xc0005d23c0) (3) Data frame handling\nI0511 18:49:46.322614 177 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0511 18:49:46.322640 177 log.go:172] (0xc0006c0be0) (1) Data frame handling\nI0511 18:49:46.322697 177 log.go:172] (0xc0006c0be0) (1) Data frame sent\nI0511 18:49:46.322721 177 log.go:172] (0xc00003a4d0) (0xc0006c0be0) Stream removed, broadcasting: 1\nI0511 18:49:46.322954 177 log.go:172] (0xc00003a4d0) Go away received\nI0511 18:49:46.323126 177 log.go:172] (0xc00003a4d0) (0xc0006c0be0) Stream removed, broadcasting: 1\nI0511 18:49:46.323155 177 log.go:172] (0xc00003a4d0) (0xc0005d23c0) Stream removed, broadcasting: 3\nI0511 18:49:46.323165 177 log.go:172] (0xc00003a4d0) (0xc0005d3360) Stream removed, broadcasting: 5\n" May 11 18:49:46.327: INFO: stdout: "iptables" May 11 18:49:46.327: INFO: proxyMode: iptables May 11 18:49:46.333: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 18:49:46.531: INFO: Pod kube-proxy-mode-detector still exists May 11 18:49:48.531: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 18:49:48.634: INFO: Pod kube-proxy-mode-detector still exists May 11 18:49:50.531: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 18:49:50.535: INFO: Pod kube-proxy-mode-detector still exists May 11 18:49:52.531: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 18:49:52.535: INFO: Pod kube-proxy-mode-detector still exists May 11 18:49:54.531: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 18:49:54.534: INFO: Pod kube-proxy-mode-detector still exists May 11 18:49:56.531: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 18:49:58.012: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2922 STEP: creating replication controller 
affinity-clusterip-timeout in namespace services-2922 I0511 18:49:58.443078 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2922, replica count: 3 I0511 18:50:01.493603 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:50:04.493867 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:50:07.494087 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 18:50:10.494250 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 18:50:10.498: INFO: Creating new exec pod May 11 18:50:19.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpod-affinity6lqr5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 11 18:50:20.235: INFO: stderr: "I0511 18:50:20.140345 196 log.go:172] (0xc000a000b0) (0xc00067c640) Create stream\nI0511 18:50:20.140393 196 log.go:172] (0xc000a000b0) (0xc00067c640) Stream added, broadcasting: 1\nI0511 18:50:20.142683 196 log.go:172] (0xc000a000b0) Reply frame received for 1\nI0511 18:50:20.142732 196 log.go:172] (0xc000a000b0) (0xc000430280) Create stream\nI0511 18:50:20.142752 196 log.go:172] (0xc000a000b0) (0xc000430280) Stream added, broadcasting: 3\nI0511 18:50:20.143585 196 log.go:172] (0xc000a000b0) Reply frame received for 3\nI0511 18:50:20.143641 196 log.go:172] (0xc000a000b0) (0xc000430a00) Create stream\nI0511 18:50:20.143663 196 log.go:172] (0xc000a000b0) (0xc000430a00) Stream added, broadcasting: 5\nI0511 18:50:20.144775 196 log.go:172] (0xc000a000b0) Reply frame received for 5\nI0511 18:50:20.229945 196 log.go:172] (0xc000a000b0) Data frame received for 3\nI0511 18:50:20.229995 196 log.go:172] (0xc000430280) (3) Data frame handling\nI0511 18:50:20.230037 196 log.go:172] (0xc000a000b0) Data frame received for 5\nI0511 18:50:20.230062 196 log.go:172] (0xc000430a00) (5) Data frame handling\nI0511 18:50:20.230086 196 log.go:172] (0xc000430a00) (5) Data frame sent\nI0511 18:50:20.230105 196 log.go:172] (0xc000a000b0) Data frame received for 5\nI0511 18:50:20.230117 196 log.go:172] (0xc000430a00) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0511 18:50:20.231188 196 log.go:172] (0xc000a000b0) Data frame received for 1\nI0511 18:50:20.231199 196 log.go:172] (0xc00067c640) (1) Data frame handling\nI0511 18:50:20.231205 196 log.go:172] (0xc00067c640) (1) Data frame sent\nI0511 18:50:20.231331 196 log.go:172] (0xc000a000b0) (0xc00067c640) Stream removed, broadcasting: 1\nI0511 18:50:20.231560 196 log.go:172] (0xc000a000b0) (0xc00067c640) Stream removed, broadcasting: 1\nI0511 18:50:20.231575 196 log.go:172] (0xc000a000b0) (0xc000430280) Stream removed, broadcasting: 3\nI0511 18:50:20.231583 196 log.go:172] (0xc000a000b0) (0xc000430a00) Stream removed, broadcasting: 5\n" May 11 18:50:20.235: INFO: stdout: "" May 11 18:50:20.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-2922 execpod-affinity6lqr5 -- /bin/sh -x -c nc -zv -t -w 2 10.106.92.66 80' May 11 18:50:20.437: INFO: stderr: "I0511 18:50:20.366039 216 log.go:172] (0xc000a7b760) (0xc000b80140) Create stream\nI0511 18:50:20.366101 216 log.go:172] (0xc000a7b760) (0xc000b80140) Stream added, broadcasting: 1\nI0511 18:50:20.369953 216 log.go:172] (0xc000a7b760) Reply frame received for 1\nI0511 18:50:20.369986 216 log.go:172] (0xc000a7b760) (0xc0003e0460) Create stream\nI0511 18:50:20.369995 216 log.go:172] (0xc000a7b760) (0xc0003e0460) Stream added, broadcasting: 3\nI0511 18:50:20.370953 216 log.go:172] (0xc000a7b760) Reply frame received for 3\nI0511 18:50:20.370980 216 log.go:172] (0xc000a7b760) (0xc0006c8e60) Create stream\nI0511 18:50:20.370988 216 log.go:172] (0xc000a7b760) (0xc0006c8e60) Stream added, broadcasting: 5\nI0511 18:50:20.371790 216 log.go:172] (0xc000a7b760) Reply frame received for 5\nI0511 18:50:20.432160 216 log.go:172] (0xc000a7b760) Data frame received for 3\nI0511 18:50:20.432195 216 log.go:172] (0xc0003e0460) (3) Data frame handling\nI0511 18:50:20.432517 216 log.go:172] (0xc000a7b760) Data frame received for 5\nI0511 18:50:20.432536 216 log.go:172] (0xc0006c8e60) (5) Data frame handling\nI0511 18:50:20.432552 216 log.go:172] (0xc0006c8e60) (5) Data frame sent\nI0511 18:50:20.432563 216 log.go:172] (0xc000a7b760) Data frame received for 5\nI0511 18:50:20.432574 216 log.go:172] (0xc0006c8e60) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.92.66 80\nConnection to 10.106.92.66 80 port [tcp/http] succeeded!\nI0511 18:50:20.433748 216 log.go:172] (0xc000a7b760) Data frame received for 1\nI0511 18:50:20.433774 216 log.go:172] (0xc000b80140) (1) Data frame handling\nI0511 18:50:20.433787 216 log.go:172] (0xc000b80140) (1) Data frame sent\nI0511 18:50:20.433797 216 log.go:172] (0xc000a7b760) (0xc000b80140) Stream removed, broadcasting: 1\nI0511 18:50:20.433817 216 log.go:172] (0xc000a7b760) Go away received\nI0511 18:50:20.434119 216 log.go:172] (0xc000a7b760) (0xc000b80140) Stream removed, broadcasting: 1\nI0511 18:50:20.434134 216 log.go:172] (0xc000a7b760) (0xc0003e0460) Stream removed, broadcasting: 3\nI0511 18:50:20.434150 216 log.go:172] (0xc000a7b760) (0xc0006c8e60) Stream removed, broadcasting: 5\n" May 11 18:50:20.437: INFO: stdout: "" May 11 18:50:20.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpod-affinity6lqr5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.92.66:80/ ; done' May 11 18:50:20.711: INFO: stderr: "I0511 18:50:20.574155 235 log.go:172] (0xc000bd0420) (0xc0003688c0) Create stream\nI0511 18:50:20.574211 235 log.go:172] (0xc000bd0420) (0xc0003688c0) Stream added, broadcasting: 1\nI0511 18:50:20.576308 235 log.go:172] (0xc000bd0420) Reply frame received for 1\nI0511 18:50:20.576326 235 log.go:172] (0xc000bd0420) (0xc000368dc0) Create stream\nI0511 18:50:20.576340 235 log.go:172] (0xc000bd0420) (0xc000368dc0) Stream added, broadcasting: 3\nI0511 18:50:20.576987 235 log.go:172] (0xc000bd0420) Reply frame received for 3\nI0511 18:50:20.577013 235 log.go:172] (0xc000bd0420) (0xc00014e0a0) Create stream\nI0511 18:50:20.577027 235 log.go:172] (0xc000bd0420) (0xc00014e0a0) Stream added, broadcasting: 5\nI0511 18:50:20.577838 235 log.go:172] (0xc000bd0420) Reply frame received for 5\nI0511 18:50:20.630573 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.630716 235 log.go:172] 
(0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.630754 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.630889 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.630918 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.630932 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.634029 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.634050 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.634069 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.634399 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.634421 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.634442 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.634486 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.634499 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.634507 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.639368 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.639384 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.639408 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.639825 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.639847 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.639873 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\nI0511 18:50:20.639891 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.639906 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0511 18:50:20.639925 235 log.go:172] (0xc000bd0420) Data frame received for 3\n http://10.106.92.66:80/\nI0511 18:50:20.639949 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\nI0511 18:50:20.639963 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.639988 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.643162 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.643190 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.643209 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.643439 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.643452 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.643465 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.643511 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.643524 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.643564 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.646578 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.646605 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.646628 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.646878 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.646890 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.646913 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.646926 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.106.92.66:80/\nI0511 18:50:20.646935 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.646940 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.650379 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.650394 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.650399 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.651009 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.651038 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.651065 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.651087 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.651097 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.651125 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.655401 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.655421 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.655436 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.655815 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.655832 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.655841 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.655849 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.655854 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.655860 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.659650 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.659671 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.659683 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.660062 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.660076 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.660090 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.660107 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.660125 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.660142 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.663624 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.663647 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.663667 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.664011 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.664039 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.664056 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.664070 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.664077 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.664085 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.667779 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.667799 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.667813 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.668202 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 
18:50:20.668214 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.668224 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.668270 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.668291 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.668302 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.672163 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.672174 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.672181 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.672794 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.672810 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.672823 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.672873 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.672888 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.672901 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.677567 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.677587 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.677608 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.677903 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.677915 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.677923 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.677938 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.677951 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.677970 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.682969 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.682981 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.682988 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.683474 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.683485 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.683492 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.683518 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.683538 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.683563 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\nI0511 18:50:20.683574 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.683581 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.683605 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\nI0511 18:50:20.687282 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.687298 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.687311 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.687811 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.687824 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.687833 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.687849 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.687858 235 log.go:172] 
(0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.687865 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.692155 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.692172 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.692186 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.692647 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.692662 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.692676 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.692709 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.692725 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.692740 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.699469 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.699517 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.699542 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.700199 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.700216 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.700229 235 log.go:172] (0xc00014e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.700254 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.700286 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.700301 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.705744 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.705771 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.705791 235 log.go:172] (0xc000368dc0) (3) Data frame sent\nI0511 18:50:20.706284 235 log.go:172] (0xc000bd0420) Data frame received for 5\nI0511 18:50:20.706320 235 log.go:172] (0xc00014e0a0) (5) Data frame handling\nI0511 18:50:20.706350 235 log.go:172] (0xc000bd0420) Data frame received for 3\nI0511 18:50:20.706368 235 log.go:172] (0xc000368dc0) (3) Data frame handling\nI0511 18:50:20.707729 235 log.go:172] (0xc000bd0420) Data frame received for 1\nI0511 18:50:20.707744 235 log.go:172] (0xc0003688c0) (1) Data frame handling\nI0511 18:50:20.707759 235 log.go:172] (0xc0003688c0) (1) Data frame sent\nI0511 18:50:20.707822 235 log.go:172] (0xc000bd0420) (0xc0003688c0) Stream removed, broadcasting: 1\nI0511 18:50:20.708025 235 log.go:172] (0xc000bd0420) Go away received\nI0511 18:50:20.708203 235 log.go:172] (0xc000bd0420) (0xc0003688c0) Stream removed, broadcasting: 1\nI0511 18:50:20.708231 235 log.go:172] (0xc000bd0420) (0xc000368dc0) Stream removed, broadcasting: 3\nI0511 18:50:20.708244 235 log.go:172] (0xc000bd0420) (0xc00014e0a0) Stream removed, broadcasting: 5\n" May 11 18:50:20.712: INFO: stdout: "\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9\naffinity-clusterip-timeout-w8tr9" May 11 
18:50:20.712: INFO: Received response from host: May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Received response from host: affinity-clusterip-timeout-w8tr9 May 11 18:50:20.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpod-affinity6lqr5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.106.92.66:80/' May 11 18:50:20.915: INFO: stderr: "I0511 18:50:20.846407 255 log.go:172] (0xc000953340) (0xc000a6e1e0) Create stream\nI0511 18:50:20.846471 255 log.go:172] (0xc000953340) (0xc000a6e1e0) Stream added, broadcasting: 1\nI0511 18:50:20.848798 255 log.go:172] (0xc000953340) Reply frame received for 1\nI0511 18:50:20.848837 255 log.go:172] (0xc000953340) (0xc000720e60) Create stream\nI0511 18:50:20.848850 255 log.go:172] (0xc000953340) (0xc000720e60) Stream added, broadcasting: 3\nI0511 18:50:20.849898 255 log.go:172] (0xc000953340) Reply frame received for 3\nI0511 18:50:20.849944 255 log.go:172] (0xc000953340) (0xc000721400) Create stream\nI0511 18:50:20.849967 255 log.go:172] (0xc000953340) (0xc000721400) Stream added, broadcasting: 5\nI0511 18:50:20.850638 255 log.go:172] (0xc000953340) Reply frame received for 5\nI0511 18:50:20.905069 255 log.go:172] (0xc000953340) Data frame received for 5\nI0511 18:50:20.905108 255 log.go:172] (0xc000721400) (5) Data frame handling\nI0511 18:50:20.905317 255 log.go:172] (0xc000721400) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:20.908715 255 log.go:172] (0xc000953340) Data frame received for 3\nI0511 18:50:20.908734 255 log.go:172] (0xc000720e60) (3) Data frame handling\nI0511 18:50:20.908749 255 log.go:172] (0xc000720e60) (3) Data frame sent\nI0511 18:50:20.909523 255 log.go:172] (0xc000953340) Data frame received for 5\nI0511 18:50:20.909540 255 log.go:172] (0xc000721400) (5) Data frame handling\nI0511 18:50:20.909569 255 log.go:172] (0xc000953340) Data frame received for 3\nI0511 18:50:20.909600 255 log.go:172] (0xc000720e60) (3) Data frame handling\nI0511 18:50:20.911009 255 log.go:172] (0xc000953340) Data frame received for 1\nI0511 18:50:20.911026 255 log.go:172] (0xc000a6e1e0) (1) Data frame handling\nI0511 
May 11 18:50:35.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpod-affinity6lqr5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.106.92.66:80/'
May 11 18:50:36.136: INFO: stderr: "I0511 18:50:36.055550 274 log.go:172] (0xc000a4d550) (0xc000b1a1e0) Create stream\nI0511 18:50:36.055596 274 log.go:172] (0xc000a4d550) (0xc000b1a1e0) Stream added, broadcasting: 1\nI0511 18:50:36.058817 274 log.go:172] (0xc000a4d550) Reply frame received for 1\nI0511 18:50:36.058848 274 log.go:172] (0xc000a4d550) (0xc00020afa0) Create stream\nI0511 18:50:36.058866 274 log.go:172] (0xc000a4d550) (0xc00020afa0) Stream added, broadcasting: 3\nI0511 18:50:36.059736 274 log.go:172] (0xc000a4d550) Reply frame received for 3\nI0511 18:50:36.059766 274 log.go:172] (0xc000a4d550) (0xc000b1a280) Create stream\nI0511 18:50:36.059777 274 log.go:172] (0xc000a4d550) (0xc000b1a280) Stream added, broadcasting: 5\nI0511 18:50:36.060635 274 log.go:172] (0xc000a4d550) Reply frame received for 5\nI0511 18:50:36.125659 274 log.go:172] (0xc000a4d550) Data frame received for 5\nI0511 18:50:36.125682 274 log.go:172] (0xc000b1a280) (5) Data frame handling\nI0511 18:50:36.125698 274 log.go:172] (0xc000b1a280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.106.92.66:80/\nI0511 18:50:36.129859 274 log.go:172] (0xc000a4d550) Data frame received for 3\nI0511 18:50:36.129994 274 log.go:172] (0xc00020afa0) (3) Data frame handling\nI0511 18:50:36.130037 274 log.go:172] (0xc00020afa0) (3) Data frame sent\nI0511 18:50:36.130294 274 log.go:172] (0xc000a4d550) Data frame received for 3\nI0511 18:50:36.130314 274 log.go:172] (0xc00020afa0) (3) Data frame handling\nI0511 18:50:36.130503 274 log.go:172] (0xc000a4d550) Data frame received for 5\nI0511 18:50:36.130527 274 log.go:172] (0xc000b1a280) (5) Data frame handling\nI0511 18:50:36.132028 274 log.go:172] (0xc000a4d550) Data frame received for 1\nI0511 18:50:36.132047 274 log.go:172] (0xc000b1a1e0) (1) Data frame handling\nI0511 18:50:36.132057 274 log.go:172] (0xc000b1a1e0) (1) Data frame sent\nI0511 18:50:36.132069 274 log.go:172] (0xc000a4d550) (0xc000b1a1e0) Stream removed, broadcasting: 1\nI0511 18:50:36.132139 274 log.go:172] (0xc000a4d550) Go away received\nI0511 18:50:36.132331 274 log.go:172] (0xc000a4d550) (0xc000b1a1e0) Stream removed, broadcasting: 1\nI0511 18:50:36.132348 274 log.go:172] (0xc000a4d550) (0xc00020afa0) Stream removed, broadcasting: 3\nI0511 18:50:36.132357 274 log.go:172] (0xc000a4d550) (0xc000b1a280) Stream removed, broadcasting: 5\n"
May 11 18:50:36.136: INFO: stdout: "affinity-clusterip-timeout-g5tls"
May 11 18:50:36.136: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2922, will wait for the garbage collector to delete the pods
May 11 18:50:36.650: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 418.100406ms
May 11 18:50:37.150: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.21813ms
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:50:55.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2922" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:73.713 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":29,"skipped":393,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
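For reference, the affinity behavior exercised above is driven by two Service fields. A minimal sketch, assuming illustrative names, labels, and timeout value (this is not the test's actual manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout   # name follows the log; everything else is assumed
spec:
  type: ClusterIP
  selector:
    app: affinity-backend            # assumed pod label
  ports:
  - port: 80
    targetPort: 80
  sessionAffinity: ClientIP          # pin each client IP to one endpoint
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10             # affinity expires after this idle period
```

This matches what the log shows: sixteen consecutive responses from affinity-clusterip-timeout-w8tr9, then, after the test idles past the configured timeout (note the roughly 15 s gap before the request at 18:50:35.916), the next request lands on a different endpoint (affinity-clusterip-timeout-g5tls).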
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:50:55.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 11 18:51:07.683: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 18:51:08.010: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 18:51:10.010: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 18:51:10.059: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 18:51:12.010: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 18:51:12.014: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 18:51:14.010: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 18:51:14.015: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 18:51:16.010: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 18:51:16.683: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 18:51:18.010: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 18:51:18.013: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:51:18.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3086" for this suite.
• [SLOW TEST:22.521 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":30,"skipped":422,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
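The postStart hook this test exercises is declared on the container spec. A minimal sketch, where only the pod name comes from the log and the image and commands are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook          # pod name as seen in the log
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox:1.31                       # placeholder image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]  # illustrative hook
```

The kubelet runs the handler right after the container is created, and the container is not reported as Running until the handler returns, which is what the "check poststart hook" step above depends on.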
"pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa": Phase="Pending", Reason="", readiness=false. Elapsed: 23.960688ms May 11 18:51:20.694: INFO: Pod "pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383709565s May 11 18:51:22.699: INFO: Pod "pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.387972975s May 11 18:51:24.702: INFO: Pod "pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa": Phase="Running", Reason="", readiness=true. Elapsed: 6.391052167s May 11 18:51:26.825: INFO: Pod "pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.514228355s STEP: Saw pod success May 11 18:51:26.825: INFO: Pod "pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa" satisfied condition "Succeeded or Failed" May 11 18:51:26.827: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa container projected-secret-volume-test: STEP: delete the pod May 11 18:51:27.654: INFO: Waiting for pod pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa to disappear May 11 18:51:27.678: INFO: Pod pod-projected-secrets-5e103e88-1aac-42b2-9144-b5f04cf84caa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:51:27.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-867" for this suite. • [SLOW TEST:9.665 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":31,"skipped":458,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:51:27.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:51:28.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7894" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":32,"skipped":465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:51:28.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-e691a6e7-ac9b-4886-a3cc-86d79aab4f33 STEP: Creating a pod to test consume configMaps May 11 18:51:29.927: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182" in namespace "projected-4237" to be "Succeeded or Failed" May 11 18:51:30.215: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182": Phase="Pending", Reason="", readiness=false. Elapsed: 287.490088ms May 11 18:51:32.335: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407774172s May 11 18:51:34.445: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182": Phase="Pending", Reason="", readiness=false. Elapsed: 4.517659265s May 11 18:51:36.714: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.786742218s STEP: Saw pod success May 11 18:51:36.714: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182" satisfied condition "Succeeded or Failed" May 11 18:51:36.716: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182 container projected-configmap-volume-test: STEP: delete the pod May 11 18:51:37.294: INFO: Waiting for pod pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182 to disappear May 11 18:51:37.299: INFO: Pod pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:51:37.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4237" for this suite. 
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:51:28.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-e691a6e7-ac9b-4886-a3cc-86d79aab4f33
STEP: Creating a pod to test consume configMaps
May 11 18:51:29.927: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182" in namespace "projected-4237" to be "Succeeded or Failed"
May 11 18:51:30.215: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182": Phase="Pending", Reason="", readiness=false. Elapsed: 287.490088ms
May 11 18:51:32.335: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407774172s
May 11 18:51:34.445: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182": Phase="Pending", Reason="", readiness=false. Elapsed: 4.517659265s
May 11 18:51:36.714: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.786742218s
STEP: Saw pod success
May 11 18:51:36.714: INFO: Pod "pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182" satisfied condition "Succeeded or Failed"
May 11 18:51:36.716: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182 container projected-configmap-volume-test: 
STEP: delete the pod
May 11 18:51:37.294: INFO: Waiting for pod pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182 to disappear
May 11 18:51:37.299: INFO: Pod pod-projected-configmaps-ad3bd303-fa86-46a6-9e91-aab176a13182 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:51:37.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4237" for this suite.
• [SLOW TEST:9.070 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":33,"skipped":508,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
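Consuming a ConfigMap through a projected volume differs from a plain configMap volume only in nesting. A minimal sketch; the volume and container names follow the log, while the image, command, and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.31                       # placeholder image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # assumed ConfigMap name
```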
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:51:37.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 18:51:37.508: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Pending, waiting for it to be Running (with Ready = true)
May 11 18:51:41.479: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Pending, waiting for it to be Running (with Ready = true)
May 11 18:51:41.571: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Pending, waiting for it to be Running (with Ready = true)
May 11 18:51:43.688: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Pending, waiting for it to be Running (with Ready = true)
May 11 18:51:45.511: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Pending, waiting for it to be Running (with Ready = true)
May 11 18:51:47.511: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:51:49.510: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:51:51.511: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:51:53.511: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:51:55.598: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:51:57.511: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:51:59.510: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:52:01.544: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:52:03.837: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = false)
May 11 18:52:05.513: INFO: The status of Pod test-webserver-71c174c3-5825-4b86-b3fd-d13e19942b41 is Running (Ready = true)
May 11 18:52:05.516: INFO: Container started at 2020-05-11 18:51:45 +0000 UTC, pod became ready at 2020-05-11 18:52:03 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:52:05.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7083" for this suite.
• [SLOW TEST:28.180 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":535,"failed":0}
SSSS
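The readiness gate being verified comes from the probe's initial delay: the container runs for a while with Ready = false (the 18:51:47 through 18:52:03 window above) before the first successful probe flips it to true. A sketch with a placeholder image and an illustrative delay; the conformance test's actual values may differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver                 # name pattern follows the log
spec:
  containers:
  - name: test-webserver
    image: nginx:1.19                  # placeholder web server image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20          # pod must not be Ready before this delay
      periodSeconds: 5
      failureThreshold: 3
```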
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:52:05.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-38ddad54-3898-4a1f-88a2-6cf172b042cb
STEP: Creating a pod to test consume secrets
May 11 18:52:06.121: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae" in namespace "projected-1853" to be "Succeeded or Failed"
May 11 18:52:06.311: INFO: Pod "pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae": Phase="Pending", Reason="", readiness=false. Elapsed: 189.654706ms
May 11 18:52:08.315: INFO: Pod "pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193960073s
May 11 18:52:10.520: INFO: Pod "pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.399352178s
May 11 18:52:12.733: INFO: Pod "pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae": Phase="Running", Reason="", readiness=true. Elapsed: 6.612601485s
May 11 18:52:15.018: INFO: Pod "pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.896814607s
STEP: Saw pod success
May 11 18:52:15.018: INFO: Pod "pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae" satisfied condition "Succeeded or Failed"
May 11 18:52:15.020: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae container secret-volume-test: 
STEP: delete the pod
May 11 18:52:15.330: INFO: Waiting for pod pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae to disappear
May 11 18:52:15.490: INFO: Pod pod-projected-secrets-5e80a974-f3ae-4de7-b5ec-afdd34ddf8ae no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:52:15.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1853" for this suite.
• [SLOW TEST:9.971 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":539,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:52:15.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
May 11 18:52:15.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info'
May 11 18:52:15.985: INFO: stderr: ""
May 11 18:52:15.985: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:52:15.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8871" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":36,"skipped":564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:52:15.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 18:52:18.393: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 18:52:20.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:52:24.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:52:26.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
May 11 18:52:26.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 18:52:26.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724819938, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 18:52:30.683: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 11 18:52:31.086: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:52:31.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8628" for this suite.
STEP: Destroying namespace "webhook-8628-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:16.681 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":37,"skipped":589,"failed":0}
SSSSSSSSSSSSSSS
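A webhook that rejects CRD creation is registered roughly like the sketch below. The service namespace follows the log; the webhook names, path, and CA bundle are placeholders:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-creation                 # illustrative
webhooks:
- name: deny-crd.example.com              # hypothetical webhook name
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-8628             # namespace seen in the log
      name: e2e-test-webhook              # service name seen in the log
      path: /crd                          # assumed handler path
    caBundle: LS0tLS1CRUdJTi...           # placeholder, must be the server's CA
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail                     # deny the request if the webhook is unreachable
```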
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:52:32.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 18:52:33.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 11 18:52:35.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5025 create -f -'
May 11 18:52:51.018: INFO: stderr: ""
May 11 18:52:51.018: INFO: stdout: "e2e-test-crd-publish-openapi-5325-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 11 18:52:51.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5025 delete e2e-test-crd-publish-openapi-5325-crds test-cr'
May 11 18:52:51.721: INFO: stderr: ""
May 11 18:52:51.721: INFO: stdout: "e2e-test-crd-publish-openapi-5325-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 11 18:52:51.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5025 apply -f -'
May 11 18:52:52.432: INFO: stderr: ""
May 11 18:52:52.432: INFO: stdout: "e2e-test-crd-publish-openapi-5325-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 11 18:52:52.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5025 delete e2e-test-crd-publish-openapi-5325-crds test-cr'
May 11 18:52:52.626: INFO: stderr: ""
May 11 18:52:52.626: INFO: stdout: "e2e-test-crd-publish-openapi-5325-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 11 18:52:52.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5325-crds'
May 11 18:52:52.953: INFO: stderr: ""
May 11 18:52:52.953: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5325-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:52:55.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5025" for this suite.
• [SLOW TEST:23.489 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":38,"skipped":604,"failed":0}
SSSSSSS
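The CRD under test publishes a schema whose essential part is x-kubernetes-preserve-unknown-fields at the schema root, which is why kubectl's client-side validation accepts arbitrary properties. A sketch using the group and kind names visible in the log (everything else is assumed):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-5325-crds.crd-publish-openapi-test-unknown-at-root.example.com
spec:
  group: crd-publish-openapi-test-unknown-at-root.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-5325-crds
    singular: e2e-test-crd-publish-openapi-5325-crd
    kind: E2e-test-crd-publish-openapi-5325-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept arbitrary fields at the root
```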
[Conformance]","total":288,"completed":39,"skipped":611,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:53:13.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 11 18:53:22.888: INFO: Successfully updated pod "annotationupdate4fb4c889-faca-47a2-9572-69ea422a66eb" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:53:25.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7476" for this suite. • [SLOW TEST:12.061 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":617,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:53:25.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 18:53:26.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 18:53:28.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, 
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:53:13.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 11 18:53:22.888: INFO: Successfully updated pod "annotationupdate4fb4c889-faca-47a2-9572-69ea422a66eb"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:53:25.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7476" for this suite.
• [SLOW TEST:12.061 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":617,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:53:25.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 18:53:26.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 18:53:28.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 18:53:30.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820006, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 18:53:33.848: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 18:53:34.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:53:35.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7626" for this suite.
STEP: Destroying namespace "webhook-7626-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.464 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":41,"skipped":630,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:53:35.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
May 11 18:53:42.132: INFO: Pod pod-hostip-78bee335-7651-4cbe-846b-e216ab06a21f has hostIP: 172.17.0.12
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:53:42.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9939" for this suite.
• [SLOW TEST:6.493 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":42,"skipped":635,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:53:42.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 18:55:43.359: INFO: Deleting pod "var-expansion-2f4878d0-c153-4d95-9927-65023adeea50" in namespace "var-expansion-9087"
May 11 18:55:43.363: INFO: Wait up to 5m0s for pod "var-expansion-2f4878d0-c153-4d95-9927-65023adeea50" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:55:47.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9087" for this suite.
• [SLOW TEST:125.573 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":43,"skipped":659,"failed":0}
SSSSSSS
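The test above feeds a backtick through volume subpath expansion and expects the pod to be rejected. The mechanism itself is subPathExpr, which expands $(VAR) references from the container's environment; a valid-use sketch in which all names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion                  # illustrative
spec:
  containers:
  - name: main
    image: busybox:1.31                # placeholder image
    command: ["sh", "-c", "sleep 600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      subPathExpr: $(POD_NAME)         # expanded from the env var; a backtick in the value is rejected
  volumes:
  - name: workdir
    emptyDir: {}
```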
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:55:47.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 11 18:55:57.300: INFO: Successfully updated pod "annotationupdatef860329d-d10a-466d-a54a-55f6f4ab90db"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:55:59.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-820" for this suite.
• [SLOW TEST:11.714 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":44,"skipped":666,"failed":0}
SSS
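Both annotation-update tests (this one and the earlier Downward API volume variant) watch a downwardAPI volume file that the kubelet rewrites when pod metadata changes. A sketch of the projected form; the pod name pattern follows the log, while the image, command, and annotation are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate               # name pattern follows the log
  annotations:
    build: "one"                       # assumed annotation the test later updates
spec:
  containers:
  - name: client-container
    image: busybox:1.31                # placeholder image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations   # file content tracks annotation changes
```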
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:55:59.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-e8d8f320-34bf-4102-a696-520de90a3fe6
STEP: Creating configMap with name cm-test-opt-upd-18411d3b-8641-4a62-a155-20486b3cae46
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e8d8f320-34bf-4102-a696-520de90a3fe6
STEP: Updating configmap cm-test-opt-upd-18411d3b-8641-4a62-a155-20486b3cae46
STEP: Creating configMap with name cm-test-opt-create-d9eedc4e-a923-4353-9fe1-ec0b5056c5a4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:57:37.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2019" for this suite.
• [SLOW TEST:98.068 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":45,"skipped":669,"failed":0}
SSSS
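The optional: true flag on the volume source is what lets the test delete one ConfigMap and create another while the pod keeps running. A sketch; the ConfigMap name comes from the log, the rest is assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps                 # illustrative
spec:
  containers:
  - name: createcm-volume-test
    image: busybox:1.31                # placeholder image
    command: ["sh", "-c", "while true; do cat /etc/cm-volume/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: createcm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create-d9eedc4e-a923-4353-9fe1-ec0b5056c5a4   # name from the log
      optional: true                   # mount is tolerated (empty) until the ConfigMap appears
```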
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 18:57:37.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0511 18:57:48.233664 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 18:57:48.233: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 18:57:48.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6789" for this suite.
• [SLOW TEST:10.689 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":46,"skipped":673,"failed":0}
SSSSSSSSSSSS
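Garbage collection hinges on metadata.ownerReferences carried by the dependent objects. A sketch of what an rc-created pod's metadata looks like; the pod name and image are illustrative, and the uid is a placeholder that in reality must match the owner's actual UID:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod                          # illustrative
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted           # owner name pattern used by the GC tests
    uid: 00000000-0000-0000-0000-000000000000   # placeholder UID
    controller: true
    blockOwnerDeletion: true                    # foreground deletion waits on this dependent
spec:
  containers:
  - name: nginx
    image: nginx:1.19                           # placeholder image
```

When the owner is deleted without orphaning, the garbage collector removes every object whose ownerReferences point at it, which is exactly what the "wait for all pods to be garbage collected" step observes.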
• [SLOW TEST:43.972 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":47,"skipped":685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:58:32.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 11 18:58:47.723: INFO: 5 pods remaining May 11 18:58:47.723: INFO: 5 pods has nil DeletionTimestamp May 11 18:58:47.723: INFO: STEP: Gathering metrics W0511 18:58:51.843283 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 18:58:51.843: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:58:51.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7396" for this suite. 
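------------------------------
In the dependents test above, half of the pods created by simpletest-rc-to-be-deleted also get simpletest-rc-to-stay added as an owner, which is what keeps them alive while the first RC is deleted. A sketch of adding that second owner reference (the pod update call itself is omitted; names are the ones from the run above):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addSecondOwner appends another ReplicationController to a pod's owner
// references. The garbage collector only deletes an object once *all* of
// its owners are gone, so this pod survives deletion of the first RC
// even when that RC is deleted while waiting for its dependents.
func addSecondOwner(pod *corev1.Pod, rcToStay *corev1.ReplicationController) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rcToStay.Name, // e.g. "simpletest-rc-to-stay" above
		UID:        rcToStay.UID,
	})
}
------------------------------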
• [SLOW TEST:20.637 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":48,"skipped":735,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:58:52.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:58:55.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-441" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":49,"skipped":745,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:58:55.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 18:58:55.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d" in namespace "downward-api-9705" to be "Succeeded or Failed" May 11 18:58:55.675: INFO: Pod "downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d": Phase="Pending", Reason="", readiness=false. 
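------------------------------
Stepping back to the ResourceQuota case a few entries up: the whole lifecycle it walks (create, get, update, delete, verify) fits in a few client-go calls. A sketch under assumed names and quota values; only the call pattern mirrors the test:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// quotaLifecycle creates a quota, modifies it, and deletes it again,
// mirroring the STEPs above. Values are illustrative.
func quotaLifecycle(cs kubernetes.Interface, ns string) error {
	ctx := context.TODO()
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{Hard: corev1.ResourceList{
			corev1.ResourcePods: resource.MustParse("5"),
		}},
	}
	rq, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// The "Updating a ResourceQuota" step: bump the hard pod count.
	rq.Spec.Hard[corev1.ResourcePods] = resource.MustParse("10")
	if _, err := cs.CoreV1().ResourceQuotas(ns).Update(ctx, rq, metav1.UpdateOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().ResourceQuotas(ns).Delete(ctx, rq.Name, metav1.DeleteOptions{})
}
------------------------------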
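------------------------------
The downward-API pod being polled here mounts a volume whose files inherit the volume-level DefaultMode; roughly the shape below, with an illustrative 0400 mode and busybox standing in for the test container:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod projects the pod's own name into a file whose mode is
// taken from the volume's DefaultMode; the test asserts that mode.
func downwardAPIPod() *corev1.Pod {
	mode := int32(0400) // illustrative; the test checks whatever it sets here
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
}
------------------------------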
Elapsed: 30.517018ms May 11 18:58:57.678: INFO: Pod "downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033992867s May 11 18:58:59.737: INFO: Pod "downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093498602s May 11 18:59:02.382: INFO: Pod "downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.73817757s STEP: Saw pod success May 11 18:59:02.382: INFO: Pod "downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d" satisfied condition "Succeeded or Failed" May 11 18:59:02.643: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d container client-container: STEP: delete the pod May 11 18:59:03.831: INFO: Waiting for pod downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d to disappear May 11 18:59:03.849: INFO: Pod downwardapi-volume-d73fe548-37b7-4e4a-a172-d104c509be8d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:59:03.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9705" for this suite. • [SLOW TEST:8.675 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":50,"skipped":746,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:59:03.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 18:59:04.912: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 11 18:59:09.962: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 18:59:17.184: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 11 18:59:19.188: INFO: Creating deployment "test-rollover-deployment" May 11 18:59:19.858: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 11 18:59:24.455: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 11 18:59:24.720: INFO: Ensure that both replica sets have 1 created replica May 11 18:59:24.819: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 11 18:59:25.005: INFO: Updating deployment 
test-rollover-deployment May 11 18:59:25.005: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 11 18:59:30.015: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 11 18:59:31.067: INFO: Make sure deployment "test-rollover-deployment" is complete May 11 18:59:31.356: INFO: all replica sets need to contain the pod-template-hash label May 11 18:59:31.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820371, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820360, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:59:33.365: INFO: all replica sets need to contain the pod-template-hash label May 11 18:59:33.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820371, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820360, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:59:35.363: INFO: all replica sets need to contain the pod-template-hash label May 11 18:59:35.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820373, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820360, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:59:37.361: INFO: all replica sets need to contain the pod-template-hash 
label May 11 18:59:37.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820373, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820360, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:59:39.408: INFO: all replica sets need to contain the pod-template-hash label May 11 18:59:39.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820373, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820360, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:59:41.365: INFO: all replica sets need to contain the pod-template-hash label May 11 18:59:41.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820373, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820360, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:59:43.538: INFO: all replica sets need to contain the pod-template-hash label May 11 18:59:43.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820362, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820373, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820360, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 18:59:45.364: INFO: May 11 18:59:45.364: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 18:59:45.376: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8325 /apis/apps/v1/namespaces/deployment-8325/deployments/test-rollover-deployment 006f5e35-54cd-4579-aed1-27884dfb9d69 3527418 2 2020-05-11 18:59:19 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-11 18:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047a3218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 18:59:22 +0000 UTC,LastTransitionTime:2020-05-11 18:59:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-11 18:59:44 +0000 UTC,LastTransitionTime:2020-05-11 18:59:20 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 18:59:45.379: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-8325 /apis/apps/v1/namespaces/deployment-8325/replicasets/test-rollover-deployment-7c4fd9c879 59ec5c3e-0105-4e7e-b546-ce8552c1325f 3527407 2 2020-05-11 18:59:25 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 006f5e35-54cd-4579-aed1-27884dfb9d69 0xc0047a3847 0xc0047a3848}] [] [{kube-controller-manager Update apps/v1 2020-05-11 18:59:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"006f5e35-54cd-4579-aed1-27884dfb9d69\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047a38d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 18:59:45.379: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 11 18:59:45.379: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8325 /apis/apps/v1/namespaces/deployment-8325/replicasets/test-rollover-controller 60aafec7-a182-467b-87d0-da422ff93e7b 3527417 2 2020-05-11 18:59:04 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 006f5e35-54cd-4579-aed1-27884dfb9d69 0xc0047a360f 0xc0047a3630}] [] [{e2e.test Update apps/v1 2020-05-11 18:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"006f5e35-54cd-4579-aed1-27884dfb9d69\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0047a36d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 18:59:45.379: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-8325 /apis/apps/v1/namespaces/deployment-8325/replicasets/test-rollover-deployment-5686c4cfd5 41340873-b908-4815-885d-7b2bce7fb772 3527342 2 2020-05-11 18:59:20 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 006f5e35-54cd-4579-aed1-27884dfb9d69 0xc0047a3747 0xc0047a3748}] [] [{kube-controller-manager Update apps/v1 
2020-05-11 18:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"006f5e35-54cd-4579-aed1-27884dfb9d69\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047a37d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 18:59:45.382: INFO: Pod "test-rollover-deployment-7c4fd9c879-xpxdl" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-xpxdl test-rollover-deployment-7c4fd9c879- deployment-8325 /api/v1/namespaces/deployment-8325/pods/test-rollover-deployment-7c4fd9c879-xpxdl f854df33-f0c8-4d2c-933a-888e5bf938cb 3527360 0 2020-05-11 18:59:28 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 59ec5c3e-0105-4e7e-b546-ce8552c1325f 0xc0068b1827 0xc0068b1828}] [] [{kube-controller-manager Update v1 2020-05-11 18:59:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59ec5c3e-0105-4e7e-b546-ce8552c1325f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 18:59:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtz9q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtz9q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtz9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 18:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-11 18:59:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 18:59:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 18:59:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.71,StartTime:2020-05-11 18:59:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 18:59:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://31b4431be31ffe1e83b748b9f967c9e9957a90056984cc0ea43ae753bf39b5f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:59:45.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8325" for this suite. • [SLOW TEST:41.514 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":51,"skipped":751,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:59:45.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8f3ccca2-4395-4363-b0fc-f304c59507b3 STEP: Creating a pod to test consume secrets May 11 18:59:45.910: INFO: Waiting up to 5m0s for pod "pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7" in namespace "secrets-2730" to be "Succeeded or Failed" May 11 18:59:45.915: INFO: Pod "pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.550226ms May 11 18:59:47.931: INFO: Pod "pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7": Phase="Pending", Reason="", readiness=false. 
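------------------------------
The rollover that just completed above hinges on the deployment's strategy, visible in the object dump: MaxSurge=1, MaxUnavailable=0, MinReadySeconds=10, so the new ReplicaSet is brought up and held for readiness before the old ones are scaled to zero. A sketch of the image update that triggers revision 2; the clientset and the new image name are assumptions, not the test's actual values:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollover changes the pod template image, which creates a new
// ReplicaSet and rolls the old ones to zero replicas, as the status
// dumps above show.
func rollover(cs kubernetes.Interface, ns string) error {
	ctx := context.TODO()
	d, err := cs.AppsV1().Deployments(ns).Get(ctx, "test-rollover-deployment", metav1.GetOptions{})
	if err != nil {
		return err
	}
	d.Spec.Template.Spec.Containers[0].Image = "example.invalid/agnhost:next" // placeholder image
	_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
	return err
}
------------------------------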
Elapsed: 2.021646478s May 11 18:59:49.935: INFO: Pod "pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025445534s May 11 18:59:51.975: INFO: Pod "pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065750273s STEP: Saw pod success May 11 18:59:51.976: INFO: Pod "pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7" satisfied condition "Succeeded or Failed" May 11 18:59:51.978: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7 container secret-volume-test: STEP: delete the pod May 11 18:59:52.439: INFO: Waiting for pod pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7 to disappear May 11 18:59:52.461: INFO: Pod pod-secrets-92645c08-a8b0-40b9-abef-f1408ce416c7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:59:52.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2730" for this suite. • [SLOW TEST:7.085 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":760,"failed":0} SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 18:59:52.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 11 18:59:52.567: INFO: Waiting up to 5m0s for pod "downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e" in namespace "downward-api-3466" to be "Succeeded or Failed" May 11 18:59:52.708: INFO: Pod "downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e": Phase="Pending", Reason="", readiness=false. Elapsed: 141.397348ms May 11 18:59:54.839: INFO: Pod "downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271992454s May 11 18:59:57.109: INFO: Pod "downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e": Phase="Running", Reason="", readiness=true. Elapsed: 4.542298743s May 11 18:59:59.254: INFO: Pod "downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e": Phase="Succeeded", Reason="", readiness=false. 
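------------------------------
For the secrets case that just passed above, the pod combines a secret volume DefaultMode with RunAsUser and FSGroup so a non-root user can read the mounted keys. A sketch with illustrative IDs, mode, and image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretPod mounts secretName with an explicit file mode; fsGroup makes
// the kubelet chown the volume so the non-root user can read it.
func secretPod(secretName string) *corev1.Pod {
	mode := int32(0440)
	uid, gid := int64(1000), int64(2000) // illustrative non-root IDs
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &gid,
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secretName,
						DefaultMode: &mode,
					},
				},
			}},
		},
	}
}
------------------------------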
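------------------------------
The downward-API pod in flight below gets limits.cpu and limits.memory injected as environment variables; since its container declares no limits, they fall back to the node's allocatable values, which is what the test verifies. A sketch of the env wiring (container name and image are placeholders):

package sketch

import corev1 "k8s.io/api/core/v1"

// dapiContainer exposes resource limits through the downward API. With
// no Resources set on the container, each value defaults to the node's
// allocatable amount.
func dapiContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{
				Name: "CPU_LIMIT",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						Resource: "limits.cpu",
					},
				},
			},
			{
				Name: "MEMORY_LIMIT",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						Resource: "limits.memory",
					},
				},
			},
		},
	}
}
------------------------------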
Elapsed: 6.686622442s STEP: Saw pod success May 11 18:59:59.254: INFO: Pod "downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e" satisfied condition "Succeeded or Failed" May 11 18:59:59.263: INFO: Trying to get logs from node latest-worker2 pod downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e container dapi-container: STEP: delete the pod May 11 18:59:59.803: INFO: Waiting for pod downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e to disappear May 11 18:59:59.887: INFO: Pod downward-api-9df7afb8-2187-4cc9-a782-8cf9b416d83e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 18:59:59.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3466" for this suite. • [SLOW TEST:7.566 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":762,"failed":0} SSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:00:00.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4863 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4863;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4863 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4863;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4863.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4863.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4863.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4863.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4863.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4863.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc;check="$$(dig +notcp 
+noall +answer +search _http._tcp.test-service-2.dns-4863.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4863.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4863.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4863.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4863.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 204.169.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.169.204_udp@PTR;check="$$(dig +tcp +noall +answer +search 204.169.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.169.204_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4863 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4863;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4863 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4863;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4863.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4863.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4863.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4863.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4863.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4863.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4863.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4863.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4863.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4863.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4863.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4863.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 204.169.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.169.204_udp@PTR;check="$$(dig +tcp +noall +answer +search 204.169.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.169.204_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 19:00:19.666: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:19.739: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:19.743: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:19.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:19.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:19.994: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:19.997: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.000: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.116: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.119: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.122: INFO: Unable to read jessie_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.124: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.127: INFO: Unable to read jessie_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.130: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.133: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.386: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:20.407: INFO: Lookups using dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4863 wheezy_tcp@dns-test-service.dns-4863 wheezy_udp@dns-test-service.dns-4863.svc wheezy_tcp@dns-test-service.dns-4863.svc wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4863 jessie_tcp@dns-test-service.dns-4863 jessie_udp@dns-test-service.dns-4863.svc jessie_tcp@dns-test-service.dns-4863.svc jessie_udp@_http._tcp.dns-test-service.dns-4863.svc jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc] May 11 19:00:25.412: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.415: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.419: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.422: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.426: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.452: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.457: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.461: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.480: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.483: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.485: INFO: Unable to read jessie_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.486: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.488: INFO: Unable to read jessie_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.490: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.492: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.495: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:25.511: INFO: Lookups using dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4863 wheezy_tcp@dns-test-service.dns-4863 wheezy_udp@dns-test-service.dns-4863.svc wheezy_tcp@dns-test-service.dns-4863.svc wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4863 jessie_tcp@dns-test-service.dns-4863 jessie_udp@dns-test-service.dns-4863.svc jessie_tcp@dns-test-service.dns-4863.svc jessie_udp@_http._tcp.dns-test-service.dns-4863.svc jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc] May 11 19:00:30.470: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.474: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.477: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.479: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863 from pod 
dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.482: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.484: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.487: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.490: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.666: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.872: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:30.876: INFO: Unable to read jessie_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:31.249: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:31.253: INFO: Unable to read jessie_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:31.256: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:31.259: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:31.262: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:31.603: INFO: Lookups using dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4863 wheezy_tcp@dns-test-service.dns-4863 wheezy_udp@dns-test-service.dns-4863.svc wheezy_tcp@dns-test-service.dns-4863.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4863 jessie_tcp@dns-test-service.dns-4863 jessie_udp@dns-test-service.dns-4863.svc jessie_tcp@dns-test-service.dns-4863.svc jessie_udp@_http._tcp.dns-test-service.dns-4863.svc jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc] May 11 19:00:35.411: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.414: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.417: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.419: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.422: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.424: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.426: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.429: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.451: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.454: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.456: INFO: Unable to read jessie_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.459: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.462: INFO: Unable to read jessie_udp@dns-test-service.dns-4863.svc from pod 
dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.464: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.466: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.468: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:35.482: INFO: Lookups using dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4863 wheezy_tcp@dns-test-service.dns-4863 wheezy_udp@dns-test-service.dns-4863.svc wheezy_tcp@dns-test-service.dns-4863.svc wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4863 jessie_tcp@dns-test-service.dns-4863 jessie_udp@dns-test-service.dns-4863.svc jessie_tcp@dns-test-service.dns-4863.svc jessie_udp@_http._tcp.dns-test-service.dns-4863.svc jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc] May 11 19:00:40.411: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.414: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.418: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.422: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.424: INFO: Unable to read wheezy_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.428: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.430: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.433: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod 
dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.474: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.476: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.479: INFO: Unable to read jessie_udp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.482: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863 from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.485: INFO: Unable to read jessie_udp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.488: INFO: Unable to read jessie_tcp@dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.490: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.494: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc from pod dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3: the server could not find the requested resource (get pods dns-test-1e8c9189-5d24-4483-a270-1263423c39a3) May 11 19:00:40.508: INFO: Lookups using dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4863 wheezy_tcp@dns-test-service.dns-4863 wheezy_udp@dns-test-service.dns-4863.svc wheezy_tcp@dns-test-service.dns-4863.svc wheezy_udp@_http._tcp.dns-test-service.dns-4863.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4863.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4863 jessie_tcp@dns-test-service.dns-4863 jessie_udp@dns-test-service.dns-4863.svc jessie_tcp@dns-test-service.dns-4863.svc jessie_udp@_http._tcp.dns-test-service.dns-4863.svc jessie_tcp@_http._tcp.dns-test-service.dns-4863.svc] May 11 19:00:45.590: INFO: DNS probes using dns-4863/dns-test-1e8c9189-5d24-4483-a270-1263423c39a3 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:00:46.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4863" for this suite. 
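------------------------------
A note on what the lookup spam above is doing: the probe pod runs one container per test image ("wheezy" and "jessie"), each resolving the service by progressively shorter partial-qualified names (dns-test-service, dns-test-service.dns-4863, dns-test-service.dns-4863.svc) plus the SRV names (_http._tcp....) over both UDP and TCP; each "Unable to read" line is the framework failing to fetch a per-name result file from the probe pod until the corresponding lookup has succeeded, which it finally does at 19:00:45. A minimal sketch of such a probe pod using the k8s.io/api Go types (image, command, and pod name are illustrative assumptions, not the conformance test's own):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch: resolve the service by its bare (partial-qualified) name and
	// exit once resolution succeeds. Image and command are assumptions.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-probe", Namespace: "dns-4863"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "querier",
				Image: "busybox:1.31",
				Command: []string{"sh", "-c",
					"until nslookup dns-test-service; do sleep 1; done"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

Partial-qualified names resolve because the kubelet writes a search path of <namespace>.svc.<zone>, svc.<zone>, <zone> into the pod's /etc/resolv.conf, so the bare service name expands to the fully qualified one.
------------------------------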
• [SLOW TEST:46.928 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":54,"skipped":767,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:00:46.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 11 19:00:47.323: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 19:00:47.576: INFO: Waiting for terminating namespaces to be deleted... May 11 19:00:47.579: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 11 19:00:47.584: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 11 19:00:47.584: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 11 19:00:47.584: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 11 19:00:47.584: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 11 19:00:47.584: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 19:00:47.584: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:00:47.584: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 19:00:47.584: INFO: Container kube-proxy ready: true, restart count 0 May 11 19:00:47.584: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 11 19:00:47.712: INFO: rally-251c0e11-xc5n384r from c-rally-251c0e11-prey2et2 started at 2020-05-11 19:00:07 +0000 UTC (1 container statuses recorded) May 11 19:00:47.712: INFO: Container rally-251c0e11-xc5n384r ready: true, restart count 0 May 11 19:00:47.712: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 11 19:00:47.712: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 11 19:00:47.712: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 19:00:47.712: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:00:47.712: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 19:00:47.712: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is 
respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6df6b1c2-738f-4a94-99cc-35ee8c08bc4f 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-6df6b1c2-738f-4a94-99cc-35ee8c08bc4f off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-6df6b1c2-738f-4a94-99cc-35ee8c08bc4f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:01:11.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4909" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:24.764 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":55,"skipped":770,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:01:11.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-90d1d14b-3d78-42d1-9fd0-bfa7586c7242 STEP: Creating a pod to test consume secrets May 11 19:01:12.433: INFO: Waiting up to 5m0s for pod "pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725" in namespace "secrets-1676" to be "Succeeded or Failed" May 11 19:01:12.451: INFO: Pod "pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725": Phase="Pending", Reason="", readiness=false. Elapsed: 18.241585ms May 11 19:01:14.463: INFO: Pod "pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029885108s May 11 19:01:16.467: INFO: Pod "pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034331291s May 11 19:01:18.944: INFO: Pod "pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725": Phase="Pending", Reason="", readiness=false. Elapsed: 6.511289464s May 11 19:01:21.346: INFO: Pod "pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.912681089s STEP: Saw pod success May 11 19:01:21.346: INFO: Pod "pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725" satisfied condition "Succeeded or Failed" May 11 19:01:21.349: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725 container secret-volume-test: STEP: delete the pod May 11 19:01:23.213: INFO: Waiting for pod pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725 to disappear May 11 19:01:23.267: INFO: Pod pod-secrets-fe27c0e7-fa85-4cb6-aaa9-c1d340819725 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:01:23.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1676" for this suite. • [SLOW TEST:12.124 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":56,"skipped":786,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:01:23.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:01:25.108: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:01:27.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820485, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820485, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820485, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820485, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:01:29.135: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820485, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820485, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820485, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820485, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:01:32.171: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:01:46.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5878" for this suite. STEP: Destroying namespace "webhook-5878-markers" for this suite. 
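------------------------------
The timeout behaviour verified above comes down to two fields on the webhook registration: timeoutSeconds and failurePolicy. With a 1s timeout against a webhook that sleeps 5s, the API server cancels the admission call; failurePolicy then decides whether the request is rejected (Fail) or admitted as though the webhook had allowed it (Ignore), and an omitted timeout defaults to 10s in v1, exactly the cases the STEPs walk through. A sketch of such a registration with the admissionregistration/v1 Go types (the webhook name is invented and the CABundle is omitted; this only marshals the object, it does not register it):

package main

import (
	"encoding/json"
	"fmt"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	timeout := int32(1)           // shorter than the webhook's 5s latency
	ignore := admv1.Ignore        // with admv1.Fail, the same request is rejected
	none := admv1.SideEffectClassNone
	cfg := &admv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook"},
		Webhooks: []admv1.ValidatingWebhook{{
			Name: "slow.example.com", // illustrative name
			ClientConfig: admv1.WebhookClientConfig{
				Service: &admv1.ServiceReference{Namespace: "webhook-5878", Name: "e2e-test-webhook"},
			},
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule: admv1.Rule{APIGroups: []string{""}, APIVersions: []string{"v1"},
					Resources: []string{"configmaps"}},
			}},
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
			TimeoutSeconds:          &timeout, // omit it and v1 defaults to 10s
			FailurePolicy:           &ignore,
		}},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
------------------------------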
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.399 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":57,"skipped":794,"failed":0} [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:01:48.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-223635b7-b31e-4a12-a1e7-08491073668f in namespace container-probe-3043 May 11 19:01:55.299: INFO: Started pod liveness-223635b7-b31e-4a12-a1e7-08491073668f in namespace container-probe-3043 STEP: checking the pod's current state and verifying that restartCount is present May 11 19:01:55.304: INFO: Initial restart count of pod liveness-223635b7-b31e-4a12-a1e7-08491073668f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:05:58.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3043" for this suite. 
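------------------------------
The four quiet minutes between 19:01:55 and 19:05:58 above are the point of the probe test: the kubelet keeps dialling tcp:8080, the server keeps accepting, and restartCount stays 0 for the whole observation window. A sketch of a pod wired that way (image and args are assumptions; note the probe field is named Handler in the v1.18-era API this run targets, while newer client-go releases renamed it ProbeHandler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A server listening on 8080 paired with a tcp:8080 liveness probe:
	// every periodic connect succeeds, so the container is never restarted.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp", Namespace: "container-probe-3043"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // illustrative image/tag
				Args:  []string{"netexec", "--http-port=8080"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer API versions
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------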
• [SLOW TEST:249.760 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":58,"skipped":794,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:05:58.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-8ffc5b90-dee4-4d0f-80f6-5930173dcf88 STEP: Creating a pod to test consume secrets May 11 19:05:58.543: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa" in namespace "projected-2909" to be "Succeeded or Failed" May 11 19:05:58.754: INFO: Pod "pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa": Phase="Pending", Reason="", readiness=false. Elapsed: 210.523126ms May 11 19:06:00.758: INFO: Pod "pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214314491s May 11 19:06:02.788: INFO: Pod "pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244490463s May 11 19:06:05.525: INFO: Pod "pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.98160303s May 11 19:06:07.718: INFO: Pod "pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa": Phase="Running", Reason="", readiness=true. Elapsed: 9.174600739s May 11 19:06:09.721: INFO: Pod "pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.178073859s STEP: Saw pod success May 11 19:06:09.721: INFO: Pod "pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa" satisfied condition "Succeeded or Failed" May 11 19:06:09.724: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa container projected-secret-volume-test: STEP: delete the pod May 11 19:06:10.654: INFO: Waiting for pod pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa to disappear May 11 19:06:10.719: INFO: Pod pod-projected-secrets-a63d4310-d5f4-4af7-9121-bf92a89e76fa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:06:10.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2909" for this suite. 
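------------------------------
"With mappings" above means the secret's keys are remapped to new file names via items when projected, rather than being mounted under their own names; the test container then asserts on the remapped path's content and file mode. A sketch of the volume definition (secret name, key, and path are assumed for illustration):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0644)
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"}, // assumed
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // secret key (assumed)
							Path: "new-path-data-1", // remapped file name (assumed)
							Mode: &mode,
						}},
					},
				}},
				DefaultMode: &mode,
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
------------------------------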
• [SLOW TEST:12.711 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":806,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:06:10.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-d71cd61b-ce65-4afe-ae65-71200cd3934b STEP: Creating a pod to test consume configMaps May 11 19:06:11.301: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6" in namespace "projected-5631" to be "Succeeded or Failed" May 11 19:06:11.659: INFO: Pod "pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6": Phase="Pending", Reason="", readiness=false. Elapsed: 357.132617ms May 11 19:06:13.662: INFO: Pod "pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361051863s May 11 19:06:15.668: INFO: Pod "pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366826324s May 11 19:06:17.682: INFO: Pod "pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38006472s May 11 19:06:19.896: INFO: Pod "pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.594995558s STEP: Saw pod success May 11 19:06:19.896: INFO: Pod "pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6" satisfied condition "Succeeded or Failed" May 11 19:06:19.961: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6 container projected-configmap-volume-test: STEP: delete the pod May 11 19:06:20.557: INFO: Waiting for pod pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6 to disappear May 11 19:06:20.807: INFO: Pod pod-projected-configmaps-0e0512cc-9490-41af-bfc0-4ffeaf1127e6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:06:20.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5631" for this suite. 
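------------------------------
In the test above, one configMap is consumed through two separate volumes in the same pod, and the container checks that both mount points expose the same data. A sketch (configMap name, key, and mount paths are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One projected configMap consumed via two volumes in the same pod.
	cmRef := corev1.LocalObjectReference{Name: "shared-cm"} // assumed name
	vol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{LocalObjectReference: cmRef},
					}},
				},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-twice"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "checker",
				Image:   "busybox:1.31", // illustrative
				Command: []string{"sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-one", MountPath: "/etc/cm-one"},
					{Name: "cm-two", MountPath: "/etc/cm-two"},
				},
			}},
			Volumes: []corev1.Volume{vol("cm-one"), vol("cm-two")},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------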
• [SLOW TEST:10.086 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":60,"skipped":820,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:06:20.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 11 19:06:21.158: INFO: Waiting up to 5m0s for pod "var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7" in namespace "var-expansion-4958" to be "Succeeded or Failed" May 11 19:06:21.210: INFO: Pod "var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.38169ms May 11 19:06:23.226: INFO: Pod "var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067548994s May 11 19:06:25.640: INFO: Pod "var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481814717s May 11 19:06:27.939: INFO: Pod "var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.781031013s May 11 19:06:29.956: INFO: Pod "var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7": Phase="Running", Reason="", readiness=true. Elapsed: 8.797507064s May 11 19:06:32.010: INFO: Pod "var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.852282181s STEP: Saw pod success May 11 19:06:32.010: INFO: Pod "var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7" satisfied condition "Succeeded or Failed" May 11 19:06:32.013: INFO: Trying to get logs from node latest-worker2 pod var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7 container dapi-container: STEP: delete the pod May 11 19:06:32.032: INFO: Waiting for pod var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7 to disappear May 11 19:06:32.036: INFO: Pod var-expansion-f943f905-df7e-4de2-bc8d-f6a5f622c2d7 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:06:32.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4958" for this suite. 
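------------------------------
The substitution verified above is done by the kubelet, not by a shell: $(VAR) references in a container's command and args are expanded from that container's env before the process starts, and unresolvable references are left verbatim. A sketch (env var name and message are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// $(MESSAGE) below is expanded by the kubelet from the container's env;
	// no shell is involved in the substitution.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.31", // illustrative
				Command: []string{"echo"},
				Args:    []string{"$(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from args"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------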
• [SLOW TEST:11.229 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":842,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:06:32.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:06:32.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da7275a5-f1b3-4e60-adf9-d6d1b1f2193f" in namespace "downward-api-5857" to be "Succeeded or Failed" May 11 19:06:32.843: INFO: Pod "downwardapi-volume-da7275a5-f1b3-4e60-adf9-d6d1b1f2193f": Phase="Pending", Reason="", readiness=false. Elapsed: 56.010403ms May 11 19:06:35.132: INFO: Pod "downwardapi-volume-da7275a5-f1b3-4e60-adf9-d6d1b1f2193f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.345193014s May 11 19:06:37.376: INFO: Pod "downwardapi-volume-da7275a5-f1b3-4e60-adf9-d6d1b1f2193f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.588794575s STEP: Saw pod success May 11 19:06:37.376: INFO: Pod "downwardapi-volume-da7275a5-f1b3-4e60-adf9-d6d1b1f2193f" satisfied condition "Succeeded or Failed" May 11 19:06:37.379: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-da7275a5-f1b3-4e60-adf9-d6d1b1f2193f container client-container: STEP: delete the pod May 11 19:06:37.895: INFO: Waiting for pod downwardapi-volume-da7275a5-f1b3-4e60-adf9-d6d1b1f2193f to disappear May 11 19:06:37.936: INFO: Pod downwardapi-volume-da7275a5-f1b3-4e60-adf9-d6d1b1f2193f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:06:37.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5857" for this suite. 
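------------------------------
The property under test above: when a container declares no memory limit, a downward API resourceFieldRef for limits.memory falls back to the node's allocatable memory instead of failing. A sketch of the volume wiring (pod name and file path are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No resources.limits.memory on the container: the resourceFieldRef
	// below then reports node allocatable memory rather than erroring.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-default-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.31", // illustrative
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------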
• [SLOW TEST:6.009 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":62,"skipped":854,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:06:38.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-4fg8 STEP: Creating a pod to test atomic-volume-subpath May 11 19:06:38.127: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4fg8" in namespace "subpath-5565" to be "Succeeded or Failed" May 11 19:06:38.191: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Pending", Reason="", readiness=false. Elapsed: 63.808769ms May 11 19:06:40.438: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310299186s May 11 19:06:42.441: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 4.313497511s May 11 19:06:44.444: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 6.316811556s May 11 19:06:46.449: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 8.321507242s May 11 19:06:48.453: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 10.325619817s May 11 19:06:50.456: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 12.328515209s May 11 19:06:52.536: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 14.408139506s May 11 19:06:54.543: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 16.415633334s May 11 19:06:56.605: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 18.477798598s May 11 19:06:58.610: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 20.482337467s May 11 19:07:00.614: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Running", Reason="", readiness=true. Elapsed: 22.486174542s May 11 19:07:02.623: INFO: Pod "pod-subpath-test-configmap-4fg8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.496022188s STEP: Saw pod success May 11 19:07:02.623: INFO: Pod "pod-subpath-test-configmap-4fg8" satisfied condition "Succeeded or Failed" May 11 19:07:02.826: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-4fg8 container test-container-subpath-configmap-4fg8: STEP: delete the pod May 11 19:07:03.055: INFO: Waiting for pod pod-subpath-test-configmap-4fg8 to disappear May 11 19:07:03.172: INFO: Pod pod-subpath-test-configmap-4fg8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-4fg8 May 11 19:07:03.172: INFO: Deleting pod "pod-subpath-test-configmap-4fg8" in namespace "subpath-5565" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:07:03.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5565" for this suite. • [SLOW TEST:25.128 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":63,"skipped":854,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:07:03.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:07:03.234: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:07:05.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9703" for this suite. 
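------------------------------
"For requests and from storage" in the defaulting test above names the two places the apiserver applies OpenAPI defaults: once when a create or update request omits a field, and again when an older stored object missing the field is read back. A sketch of a CRD schema carrying a default (group, kind, and field are invented for illustration; this only marshals the object):

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A structural schema with a default: the apiserver fills spec.replicas=3
	// on requests that omit it and when serving stored objects without it.
	schema := &apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"replicas": {
						Type:    "integer",
						Default: &apiextensionsv1.JSON{Raw: []byte(`3`)},
					},
				},
			},
		},
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{OpenAPIV3Schema: schema},
			}},
		},
	}
	b, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(b))
}
------------------------------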
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":64,"skipped":858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:07:05.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-2b8c0108-5dbb-465b-84f0-bfeeca6edb1d [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:07:07.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5831" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":65,"skipped":902,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:07:07.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-275ed973-e421-497e-ac08-48e28dc50a2a STEP: Creating a pod to test consume configMaps May 11 19:07:09.134: INFO: Waiting up to 5m0s for pod "pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89" in namespace "configmap-8780" to be "Succeeded or Failed" May 11 19:07:09.623: INFO: Pod "pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89": Phase="Pending", Reason="", readiness=false. Elapsed: 488.58875ms May 11 19:07:11.643: INFO: Pod "pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.508536538s May 11 19:07:13.647: INFO: Pod "pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.512640757s May 11 19:07:15.651: INFO: Pod "pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.516209088s STEP: Saw pod success May 11 19:07:15.651: INFO: Pod "pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89" satisfied condition "Succeeded or Failed" May 11 19:07:15.706: INFO: Trying to get logs from node latest-worker pod pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89 container configmap-volume-test: STEP: delete the pod May 11 19:07:15.742: INFO: Waiting for pod pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89 to disappear May 11 19:07:15.778: INFO: Pod pod-configmaps-33081f38-f476-476b-a21a-bb6b7048bd89 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:07:15.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8780" for this suite. • [SLOW TEST:8.419 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":66,"skipped":906,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:07:15.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:07:16.488: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:07:18.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820836, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820836, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820836, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820836, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:07:20.652: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820836, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820836, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820836, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820836, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:07:23.940: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:07:24.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2485" for this suite. STEP: Destroying namespace "webhook-2485-markers" for this suite. 
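------------------------------
The update/patch sequence above flips the CREATE operation out of and back into the webhook's rules, proving the apiserver honours rule changes immediately: the first configMap is created unmutated, the second is mutated. A sketch of the patch step with client-go (configuration name and rule index are assumptions; the kubeconfig path mirrors the one this run uses):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// JSON-patch the first rule of an existing mutating webhook configuration
	// so that it once again matches CREATE.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	cfg, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().Patch(
		context.TODO(), "e2e-test-mutating-webhook", // assumed name
		types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched:", cfg.Name)
}
------------------------------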
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.896 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":67,"skipped":928,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:07:24.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:07:25.033: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:07:33.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3195" for this suite. 
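CustomResourceDefinitions are served by the apiextensions.k8s.io API group rather than the core clientset, which is why the test re-reads the kubeconfig inside the [It] body to build a second client. A minimal sketch of the equivalent list call, assuming the same kubeconfig path:

package main

import (
	"context"
	"fmt"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	// CRDs live in their own group, served by a dedicated typed clientset.
	client := apiextclient.NewForConfigOrDie(cfg)
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}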
• [SLOW TEST:9.083 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":68,"skipped":940,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:07:33.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 19:07:34.557: INFO: Waiting up to 5m0s for pod "pod-07e59a69-a881-49c9-8232-e0a3ce989131" in namespace "emptydir-6458" to be "Succeeded or Failed" May 11 19:07:34.591: INFO: Pod "pod-07e59a69-a881-49c9-8232-e0a3ce989131": Phase="Pending", Reason="", readiness=false. Elapsed: 33.882136ms May 11 19:07:36.685: INFO: Pod "pod-07e59a69-a881-49c9-8232-e0a3ce989131": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127564404s May 11 19:07:38.937: INFO: Pod "pod-07e59a69-a881-49c9-8232-e0a3ce989131": Phase="Pending", Reason="", readiness=false. Elapsed: 4.379927282s May 11 19:07:41.395: INFO: Pod "pod-07e59a69-a881-49c9-8232-e0a3ce989131": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.837875493s STEP: Saw pod success May 11 19:07:41.395: INFO: Pod "pod-07e59a69-a881-49c9-8232-e0a3ce989131" satisfied condition "Succeeded or Failed" May 11 19:07:41.426: INFO: Trying to get logs from node latest-worker2 pod pod-07e59a69-a881-49c9-8232-e0a3ce989131 container test-container: STEP: delete the pod May 11 19:07:41.905: INFO: Waiting for pod pod-07e59a69-a881-49c9-8232-e0a3ce989131 to disappear May 11 19:07:42.414: INFO: Pod pod-07e59a69-a881-49c9-8232-e0a3ce989131 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:07:42.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6458" for this suite. 
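The pod in the emptydir test above writes a file with mode 0644 into an emptyDir backed by the default medium (node disk) and the suite asserts on the reported mode. A rough stand-alone equivalent, using busybox instead of the suite's mounttest image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource selects the default medium, i.e. node storage.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Create a file as root with mode 0644 and print the mode back.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}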
• [SLOW TEST:9.016 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":69,"skipped":945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:07:42.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:07:43.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a" in namespace "downward-api-7303" to be "Succeeded or Failed" May 11 19:07:43.765: INFO: Pod "downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a": Phase="Pending", Reason="", readiness=false. Elapsed: 164.724681ms May 11 19:07:46.041: INFO: Pod "downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.441365969s May 11 19:07:48.340: INFO: Pod "downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.739487258s May 11 19:07:50.527: INFO: Pod "downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.927333925s STEP: Saw pod success May 11 19:07:50.527: INFO: Pod "downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a" satisfied condition "Succeeded or Failed" May 11 19:07:50.530: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a container client-container: STEP: delete the pod May 11 19:07:50.688: INFO: Waiting for pod downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a to disappear May 11 19:07:51.078: INFO: Pod downwardapi-volume-34ab9937-933e-4fcf-859b-39a7c8b4e79a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:07:51.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7303" for this suite. 
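What the downward API test above exercises is the fallback behavior: the container sets no CPU limit, so a limits.cpu resourceFieldRef resolves to the node's allocatable CPU instead. A minimal sketch of such a pod (names and paths are illustrative, not the suite's):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// With no CPU limit set on the container, this
							// resolves to the node's allocatable CPU.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}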
• [SLOW TEST:8.794 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":70,"skipped":975,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:07:51.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:07:53.260: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 11 19:07:53.493: INFO: Pod name sample-pod: Found 0 pods out of 1 May 11 19:07:58.869: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 19:08:05.705: INFO: Creating deployment "test-rolling-update-deployment" May 11 19:08:05.710: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 11 19:08:05.745: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 11 19:08:08.009: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 11 19:08:08.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820885, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820885, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820885, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820885, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:08:10.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820885, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820885, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820885, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724820885, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:08:12.959: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 19:08:13.252: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1231 /apis/apps/v1/namespaces/deployment-1231/deployments/test-rolling-update-deployment 0beca34c-9516-43af-a4b9-8c96ba3ea17f 3530241 1 2020-05-11 19:08:05 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-11 19:08:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 19:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d44208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 19:08:05 +0000 UTC,LastTransitionTime:2020-05-11 19:08:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-11 19:08:12 +0000 UTC,LastTransitionTime:2020-05-11 19:08:05 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 19:08:13.256: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-1231 /apis/apps/v1/namespaces/deployment-1231/replicasets/test-rolling-update-deployment-df7bb669b e3e28ed0-8f5b-43d4-a945-135c5950956f 3530227 1 2020-05-11 19:08:05 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 0beca34c-9516-43af-a4b9-8c96ba3ea17f 0xc002cb6940 0xc002cb6941}] [] [{kube-controller-manager Update apps/v1 2020-05-11 19:08:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0beca34c-9516-43af-a4b9-8c96ba3ea17f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cb69b8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 19:08:13.256: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 11 19:08:13.257: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1231 /apis/apps/v1/namespaces/deployment-1231/replicasets/test-rolling-update-controller 934fae1d-5374-4eac-bc12-bd5572f273b8 3530239 2 2020-05-11 19:07:53 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 0beca34c-9516-43af-a4b9-8c96ba3ea17f 0xc002cb6807 0xc002cb6808}] [] [{e2e.test Update apps/v1 2020-05-11 19:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 19:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0beca34c-9516-43af-a4b9-8c96ba3ea17f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002cb68d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 19:08:13.261: INFO: Pod "test-rolling-update-deployment-df7bb669b-dzmjc" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-dzmjc test-rolling-update-deployment-df7bb669b- deployment-1231 /api/v1/namespaces/deployment-1231/pods/test-rolling-update-deployment-df7bb669b-dzmjc 50773ba1-2b9a-4a2b-ac39-80a95297c830 3530226 0 2020-05-11 19:08:05 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet 
test-rolling-update-deployment-df7bb669b e3e28ed0-8f5b-43d4-a945-135c5950956f 0xc002cb7100 0xc002cb7101}] [] [{kube-controller-manager Update v1 2020-05-11 19:08:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3e28ed0-8f5b-43d4-a945-135c5950956f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:08:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5mg8d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5mg8d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5mg8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountT
oken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:08:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:08:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:08:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.83,StartTime:2020-05-11 19:08:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:08:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://75aa78b5cfa51114194727ed048af1ab5181edf3712b36bab2957bed457f76dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:08:13.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1231" for this suite. 
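In the dump above, the adopted replica set test-rolling-update-controller is scaled to 0 while the deployment's own replica set takes over, and the deployment carries the 25%/25% maxUnavailable/maxSurge rollout window of the RollingUpdate strategy. A minimal sketch of a deployment with that strategy spelled out explicitly (this is not the suite's actual code, which additionally adopts a pre-existing replica set):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "rolling-update-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				// The same 25%/25% values recorded in the dump above;
				// they are also the API defaults.
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				}}},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("default").Create(context.TODO(), dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}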
• [SLOW TEST:21.688 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":71,"skipped":981,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:08:13.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 11 19:08:26.323: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3612 PodName:pod-sharedvolume-28004480-bbff-4ed3-8795-3d07416262b2 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:08:26.323: INFO: >>> kubeConfig: /root/.kube/config I0511 19:08:26.363753 7 log.go:172] (0xc0028a7ef0) (0xc0012dcc80) Create stream I0511 19:08:26.363787 7 log.go:172] (0xc0028a7ef0) (0xc0012dcc80) Stream added, broadcasting: 1 I0511 19:08:26.366170 7 log.go:172] (0xc0028a7ef0) Reply frame received for 1 I0511 19:08:26.366253 7 log.go:172] (0xc0028a7ef0) (0xc00129ef00) Create stream I0511 19:08:26.366281 7 log.go:172] (0xc0028a7ef0) (0xc00129ef00) Stream added, broadcasting: 3 I0511 19:08:26.367219 7 log.go:172] (0xc0028a7ef0) Reply frame received for 3 I0511 19:08:26.367255 7 log.go:172] (0xc0028a7ef0) (0xc00129efa0) Create stream I0511 19:08:26.367266 7 log.go:172] (0xc0028a7ef0) (0xc00129efa0) Stream added, broadcasting: 5 I0511 19:08:26.368060 7 log.go:172] (0xc0028a7ef0) Reply frame received for 5 I0511 19:08:26.425761 7 log.go:172] (0xc0028a7ef0) Data frame received for 3 I0511 19:08:26.425806 7 log.go:172] (0xc00129ef00) (3) Data frame handling I0511 19:08:26.425840 7 log.go:172] (0xc00129ef00) (3) Data frame sent I0511 19:08:26.426356 7 log.go:172] (0xc0028a7ef0) Data frame received for 5 I0511 19:08:26.426387 7 log.go:172] (0xc0028a7ef0) Data frame received for 3 I0511 19:08:26.426434 7 log.go:172] (0xc00129ef00) (3) Data frame handling I0511 19:08:26.426472 7 log.go:172] (0xc00129efa0) (5) Data frame handling I0511 19:08:26.428272 7 log.go:172] (0xc0028a7ef0) Data frame received for 1 I0511 19:08:26.428301 7 log.go:172] (0xc0012dcc80) (1) Data frame handling I0511 19:08:26.428320 7 log.go:172] (0xc0012dcc80) (1) Data frame sent I0511 19:08:26.428358 7 log.go:172] (0xc0028a7ef0) (0xc0012dcc80) Stream removed, broadcasting: 1 I0511 19:08:26.428392 7 log.go:172] (0xc0028a7ef0) Go away received I0511 19:08:26.428789 7 log.go:172] (0xc0028a7ef0) (0xc0012dcc80) Stream removed, 
broadcasting: 1 I0511 19:08:26.428810 7 log.go:172] (0xc0028a7ef0) (0xc00129ef00) Stream removed, broadcasting: 3 I0511 19:08:26.428820 7 log.go:172] (0xc0028a7ef0) (0xc00129efa0) Stream removed, broadcasting: 5 May 11 19:08:26.428: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:08:26.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3612" for this suite. • [SLOW TEST:13.169 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":72,"skipped":985,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:08:26.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:08:26.673: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 11 19:08:26.906: INFO: Number of nodes with available pods: 0 May 11 19:08:26.906: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 11 19:08:27.059: INFO: Number of nodes with available pods: 0 May 11 19:08:27.059: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:28.064: INFO: Number of nodes with available pods: 0 May 11 19:08:28.064: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:29.360: INFO: Number of nodes with available pods: 0 May 11 19:08:29.360: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:30.064: INFO: Number of nodes with available pods: 0 May 11 19:08:30.064: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:31.072: INFO: Number of nodes with available pods: 0 May 11 19:08:31.072: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:33.243: INFO: Number of nodes with available pods: 0 May 11 19:08:33.243: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:34.798: INFO: Number of nodes with available pods: 0 May 11 19:08:34.799: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:36.504: INFO: Number of nodes with available pods: 0 May 11 19:08:36.504: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:37.222: INFO: Number of nodes with available pods: 0 May 11 19:08:37.222: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:39.444: INFO: Number of nodes with available pods: 1 May 11 19:08:39.444: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 11 19:08:42.607: INFO: Number of nodes with available pods: 1 May 11 19:08:42.607: INFO: Number of running nodes: 0, number of available pods: 1 May 11 19:08:44.072: INFO: Number of nodes with available pods: 0 May 11 19:08:44.072: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 11 19:08:44.497: INFO: Number of nodes with available pods: 0 May 11 19:08:44.497: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:45.822: INFO: Number of nodes with available pods: 0 May 11 19:08:45.822: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:46.696: INFO: Number of nodes with available pods: 0 May 11 19:08:46.696: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:47.692: INFO: Number of nodes with available pods: 0 May 11 19:08:47.692: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:48.911: INFO: Number of nodes with available pods: 0 May 11 19:08:48.911: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:49.757: INFO: Number of nodes with available pods: 0 May 11 19:08:49.757: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:50.593: INFO: Number of nodes with available pods: 0 May 11 19:08:50.593: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:51.605: INFO: Number of nodes with available pods: 0 May 11 19:08:51.605: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:52.612: INFO: Number of nodes with available pods: 0 May 11 19:08:52.612: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:54.144: INFO: Number of nodes with available pods: 0 May 11 19:08:54.144: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:54.784: INFO: Number of nodes with available pods: 0 May 11 19:08:54.784: INFO: Node 
latest-worker2 is running more than one daemon pod May 11 19:08:55.502: INFO: Number of nodes with available pods: 0 May 11 19:08:55.502: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:08:56.501: INFO: Number of nodes with available pods: 1 May 11 19:08:56.501: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7909, will wait for the garbage collector to delete the pods May 11 19:08:56.739: INFO: Deleting DaemonSet.extensions daemon-set took: 179.850153ms May 11 19:08:57.039: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.251675ms May 11 19:09:06.228: INFO: Number of nodes with available pods: 0 May 11 19:09:06.228: INFO: Number of running nodes: 0, number of available pods: 0 May 11 19:09:06.491: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7909/daemonsets","resourceVersion":"3530528"},"items":null} May 11 19:09:06.494: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7909/pods","resourceVersion":"3530528"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:09:07.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7909" for this suite. • [SLOW TEST:41.796 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":73,"skipped":993,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:09:08.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:09:16.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9390" for this suite. 
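The kubelet test that just finished runs a busybox container with readOnlyRootFilesystem set and verifies that writes to / fail. A stand-alone pod spec exercising the same knob might look like this (image tag and command are illustrative, not the suite's):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29",
				// The write is expected to fail with a read-only filesystem error.
				Command: []string{"sh", "-c", "echo hi > /file || echo write blocked"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}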
• [SLOW TEST:8.142 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":74,"skipped":1012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:09:16.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 11 19:09:17.084: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9261 /api/v1/namespaces/watch-9261/configmaps/e2e-watch-test-watch-closed 5cf7f832-c591-4d6b-adea-b934f347166e 3530618 0 2020-05-11 19:09:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-11 19:09:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 19:09:17.085: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9261 /api/v1/namespaces/watch-9261/configmaps/e2e-watch-test-watch-closed 5cf7f832-c591-4d6b-adea-b934f347166e 3530619 0 2020-05-11 19:09:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-11 19:09:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 11 19:09:18.762: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9261 /api/v1/namespaces/watch-9261/configmaps/e2e-watch-test-watch-closed 5cf7f832-c591-4d6b-adea-b934f347166e 3530620 0 2020-05-11 19:09:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-11 19:09:17 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 19:09:18.762: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9261 /api/v1/namespaces/watch-9261/configmaps/e2e-watch-test-watch-closed 5cf7f832-c591-4d6b-adea-b934f347166e 3530622 0 2020-05-11 19:09:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-11 19:09:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:09:18.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9261" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":75,"skipped":1041,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:09:19.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 19:09:19.922: INFO: Waiting up to 5m0s for pod "pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb" in namespace "emptydir-3177" to be "Succeeded or Failed" May 11 19:09:20.066: INFO: Pod "pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.673948ms May 11 19:09:22.401: INFO: Pod "pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479071707s May 11 19:09:24.539: INFO: Pod "pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.616381846s May 11 19:09:26.695: INFO: Pod "pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb": Phase="Running", Reason="", readiness=true. Elapsed: 6.772389137s May 11 19:09:28.701: INFO: Pod "pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.778239318s STEP: Saw pod success May 11 19:09:28.701: INFO: Pod "pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb" satisfied condition "Succeeded or Failed" May 11 19:09:28.704: INFO: Trying to get logs from node latest-worker2 pod pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb container test-container: STEP: delete the pod May 11 19:09:28.937: INFO: Waiting for pod pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb to disappear May 11 19:09:29.162: INFO: Pod pod-37dd98ae-e5d0-40bc-b425-35dce193d8bb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:09:29.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3177" for this suite. • [SLOW TEST:9.959 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":76,"skipped":1079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:09:29.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:09:47.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5286" for this suite. • [SLOW TEST:18.970 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":288,"completed":77,"skipped":1143,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:09:48.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 11 19:09:58.901: INFO: 10 pods remaining May 11 19:09:58.901: INFO: 0 pods has nil DeletionTimestamp May 11 19:09:58.901: INFO: May 11 19:10:01.092: INFO: 0 pods remaining May 11 19:10:01.092: INFO: 0 pods has nil DeletionTimestamp May 11 19:10:01.092: INFO: May 11 19:10:02.223: INFO: 0 pods remaining May 11 19:10:02.223: INFO: 0 pods has nil DeletionTimestamp May 11 19:10:02.223: INFO: STEP: Gathering metrics W0511 19:10:04.271715 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 19:10:04.271: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:10:04.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7454" for this suite. 
• [SLOW TEST:17.240 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":78,"skipped":1145,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:10:05.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 11 19:10:05.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5029' May 11 19:10:15.218: INFO: stderr: "" May 11 19:10:15.218: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 19:10:15.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029' May 11 19:10:15.407: INFO: stderr: "" May 11 19:10:15.407: INFO: stdout: "update-demo-nautilus-6z9bh update-demo-nautilus-csdz6 " May 11 19:10:15.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z9bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:15.575: INFO: stderr: "" May 11 19:10:15.575: INFO: stdout: "" May 11 19:10:15.575: INFO: update-demo-nautilus-6z9bh is created but not running May 11 19:10:20.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029' May 11 19:10:20.682: INFO: stderr: "" May 11 19:10:20.682: INFO: stdout: "update-demo-nautilus-6z9bh update-demo-nautilus-csdz6 " May 11 19:10:20.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z9bh -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:20.781: INFO: stderr: "" May 11 19:10:20.781: INFO: stdout: "" May 11 19:10:20.781: INFO: update-demo-nautilus-6z9bh is created but not running May 11 19:10:25.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029' May 11 19:10:26.276: INFO: stderr: "" May 11 19:10:26.276: INFO: stdout: "update-demo-nautilus-6z9bh update-demo-nautilus-csdz6 " May 11 19:10:26.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z9bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:26.546: INFO: stderr: "" May 11 19:10:26.546: INFO: stdout: "true" May 11 19:10:26.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z9bh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:26.821: INFO: stderr: "" May 11 19:10:26.821: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 19:10:26.821: INFO: validating pod update-demo-nautilus-6z9bh May 11 19:10:26.825: INFO: got data: { "image": "nautilus.jpg" } May 11 19:10:26.825: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 19:10:26.825: INFO: update-demo-nautilus-6z9bh is verified up and running May 11 19:10:26.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-csdz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:26.929: INFO: stderr: "" May 11 19:10:26.929: INFO: stdout: "true" May 11 19:10:26.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-csdz6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:27.230: INFO: stderr: "" May 11 19:10:27.230: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 19:10:27.230: INFO: validating pod update-demo-nautilus-csdz6 May 11 19:10:27.234: INFO: got data: { "image": "nautilus.jpg" } May 11 19:10:27.234: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 19:10:27.234: INFO: update-demo-nautilus-csdz6 is verified up and running STEP: scaling down the replication controller May 11 19:10:27.236: INFO: scanned /root for discovery docs: May 11 19:10:27.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5029' May 11 19:10:28.555: INFO: stderr: "" May 11 19:10:28.555: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 19:10:28.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029' May 11 19:10:28.742: INFO: stderr: "" May 11 19:10:28.742: INFO: stdout: "update-demo-nautilus-6z9bh update-demo-nautilus-csdz6 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 19:10:33.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029' May 11 19:10:33.843: INFO: stderr: "" May 11 19:10:33.843: INFO: stdout: "update-demo-nautilus-6z9bh " May 11 19:10:33.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z9bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:34.008: INFO: stderr: "" May 11 19:10:34.008: INFO: stdout: "true" May 11 19:10:34.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z9bh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:34.376: INFO: stderr: "" May 11 19:10:34.376: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 19:10:34.376: INFO: validating pod update-demo-nautilus-6z9bh May 11 19:10:34.451: INFO: got data: { "image": "nautilus.jpg" } May 11 19:10:34.451: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 19:10:34.451: INFO: update-demo-nautilus-6z9bh is verified up and running STEP: scaling up the replication controller May 11 19:10:34.455: INFO: scanned /root for discovery docs: May 11 19:10:34.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5029' May 11 19:10:35.959: INFO: stderr: "" May 11 19:10:35.959: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 11 19:10:35.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029' May 11 19:10:36.085: INFO: stderr: "" May 11 19:10:36.085: INFO: stdout: "update-demo-nautilus-4fq2d update-demo-nautilus-6z9bh " May 11 19:10:36.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fq2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:36.182: INFO: stderr: "" May 11 19:10:36.183: INFO: stdout: "" May 11 19:10:36.183: INFO: update-demo-nautilus-4fq2d is created but not running May 11 19:10:41.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029' May 11 19:10:41.767: INFO: stderr: "" May 11 19:10:41.767: INFO: stdout: "update-demo-nautilus-4fq2d update-demo-nautilus-6z9bh " May 11 19:10:41.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fq2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:42.058: INFO: stderr: "" May 11 19:10:42.058: INFO: stdout: "" May 11 19:10:42.058: INFO: update-demo-nautilus-4fq2d is created but not running May 11 19:10:47.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029' May 11 19:10:47.166: INFO: stderr: "" May 11 19:10:47.166: INFO: stdout: "update-demo-nautilus-4fq2d update-demo-nautilus-6z9bh " May 11 19:10:47.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fq2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:47.270: INFO: stderr: "" May 11 19:10:47.270: INFO: stdout: "true" May 11 19:10:47.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fq2d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:47.375: INFO: stderr: "" May 11 19:10:47.375: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 19:10:47.375: INFO: validating pod update-demo-nautilus-4fq2d May 11 19:10:47.378: INFO: got data: { "image": "nautilus.jpg" } May 11 19:10:47.378: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 19:10:47.378: INFO: update-demo-nautilus-4fq2d is verified up and running May 11 19:10:47.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z9bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:47.523: INFO: stderr: "" May 11 19:10:47.523: INFO: stdout: "true" May 11 19:10:47.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z9bh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029' May 11 19:10:47.617: INFO: stderr: "" May 11 19:10:47.618: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 19:10:47.618: INFO: validating pod update-demo-nautilus-6z9bh May 11 19:10:47.620: INFO: got data: { "image": "nautilus.jpg" } May 11 19:10:47.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 19:10:47.620: INFO: update-demo-nautilus-6z9bh is verified up and running STEP: using delete to clean up resources May 11 19:10:47.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5029' May 11 19:10:47.730: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 19:10:47.730: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 19:10:47.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5029' May 11 19:10:48.323: INFO: stderr: "No resources found in kubectl-5029 namespace.\n" May 11 19:10:48.323: INFO: stdout: "" May 11 19:10:48.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5029 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 19:10:48.643: INFO: stderr: "" May 11 19:10:48.643: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:10:48.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5029" for this suite. 
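The scale-down and scale-up steps above are driven through kubectl; under the hood, kubectl scale goes through the replication controller's scale subresource. A minimal sketch of the equivalent client-go call (written as a standalone helper that takes a clientset built as in the earlier sketch; all names are parameters, not values from this run):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ScaleRC mirrors `kubectl scale rc <name> --replicas=<n>` by reading and
// updating the scale subresource, the same path kubectl itself uses.
func ScaleRC(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.CoreV1().ReplicationControllers(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.CoreV1().ReplicationControllers(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}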
• [SLOW TEST:43.360 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":79,"skipped":1152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:10:48.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7032, will wait for the garbage collector to delete the pods May 11 19:10:56.334: INFO: Deleting Job.batch foo took: 201.362052ms May 11 19:10:57.134: INFO: Terminating Job.batch foo pods took: 800.261668ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:11:35.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7032" for this suite. 
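The Job deletion in this test ("will wait for the garbage collector to delete the pods") amounts to deleting the Job with a cascading propagation policy and letting the GC remove the pods. A hedged sketch of one way to express that with client-go (the choice of background propagation here is an assumption for illustration, not necessarily what the framework uses internally):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// DeleteJob removes a Job and leaves pod cleanup to the garbage collector.
func DeleteJob(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.BatchV1().Jobs(ns).Delete(ctx, name, metav1.DeleteOptions{PropagationPolicy: &policy})
}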
• [SLOW TEST:46.906 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":80,"skipped":1192,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:11:35.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:11:44.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3329" for this suite. 
• [SLOW TEST:8.640 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":81,"skipped":1205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:11:44.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9376 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-9376 May 11 19:11:44.546: INFO: Found 0 stateful pods, waiting for 1 May 11 19:11:54.551: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 19:11:54.766: INFO: Deleting all statefulset in ns statefulset-9376 May 11 19:11:54.878: INFO: Scaling statefulset ss to 0 May 11 19:12:15.172: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:12:15.175: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:12:15.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9376" for this suite. 
• [SLOW TEST:30.944 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":82,"skipped":1258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:12:15.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:12:20.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-934" for this suite. 
• [SLOW TEST:5.307 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":83,"skipped":1292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:12:20.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 11 19:12:20.687: INFO: Waiting up to 5m0s for pod "client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c" in namespace "containers-3075" to be "Succeeded or Failed" May 11 19:12:20.716: INFO: Pod "client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.225003ms May 11 19:12:22.894: INFO: Pod "client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207533874s May 11 19:12:27.009: INFO: Pod "client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321644454s May 11 19:12:29.011: INFO: Pod "client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c": Phase="Running", Reason="", readiness=true. Elapsed: 8.324264605s May 11 19:12:31.144: INFO: Pod "client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.456701987s STEP: Saw pod success May 11 19:12:31.144: INFO: Pod "client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c" satisfied condition "Succeeded or Failed" May 11 19:12:31.601: INFO: Trying to get logs from node latest-worker2 pod client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c container test-container: STEP: delete the pod May 11 19:12:31.916: INFO: Waiting for pod client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c to disappear May 11 19:12:31.922: INFO: Pod client-containers-8ab3245c-b3e2-4094-972e-ecafff29b27c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:12:31.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3075" for this suite. 
• [SLOW TEST:11.609 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":84,"skipped":1327,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:12:32.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 11 19:12:32.796: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:12:55.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9259" for this suite. 
• [SLOW TEST:23.193 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1340,"failed":0} S ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:12:55.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:12:55.622: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 11 19:12:58.589: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:12:58.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2891" for this suite. 
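The "desired failure condition" checked above is the ReplicaFailure condition that the controller manager sets on an RC when pod creation is rejected (here, by the quota). A sketch of how that condition can be read back (illustrative helper, not the test's code):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// HasReplicaFailure reports whether the RC currently surfaces a
// ReplicaFailure condition in its status.
func HasReplicaFailure(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	rc, err := cs.CoreV1().ReplicationControllers(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range rc.Status.Conditions {
		if cond.Type == corev1.ReplicationControllerReplicaFailure && cond.Status == corev1.ConditionTrue {
			fmt.Println(cond.Reason, cond.Message)
			return true, nil
		}
	}
	return false, nil
}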
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":86,"skipped":1341,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:12:58.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 11 19:13:00.591: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix583563863/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:13:00.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3354" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":87,"skipped":1354,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:13:01.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-c3de7cd0-4e6f-4b63-8d7f-08fdf75666c5 STEP: Creating a pod to test consume configMaps May 11 19:13:03.330: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426" in namespace "configmap-6789" to be "Succeeded or Failed" May 11 19:13:03.830: INFO: Pod "pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426": Phase="Pending", Reason="", readiness=false. Elapsed: 500.690805ms May 11 19:13:06.081: INFO: Pod "pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.751694785s May 11 19:13:08.235: INFO: Pod "pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426": Phase="Pending", Reason="", readiness=false. Elapsed: 4.905500541s May 11 19:13:10.541: INFO: Pod "pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.211760753s May 11 19:13:12.900: INFO: Pod "pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.570812664s STEP: Saw pod success May 11 19:13:12.901: INFO: Pod "pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426" satisfied condition "Succeeded or Failed" May 11 19:13:12.904: INFO: Trying to get logs from node latest-worker pod pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426 container configmap-volume-test: STEP: delete the pod May 11 19:13:14.098: INFO: Waiting for pod pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426 to disappear May 11 19:13:14.133: INFO: Pod pod-configmaps-bd02cb39-28ef-414a-ae55-c841c0a74426 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:13:14.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6789" for this suite. • [SLOW TEST:12.987 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1364,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:13:14.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:13:15.450: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 11 19:13:17.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:13:19.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:13:22.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821195, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:13:24.693: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:13:36.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1396" for this suite. STEP: Destroying namespace "webhook-1396-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.593 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":89,"skipped":1370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:13:37.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-1c4acaa3-57fb-4d34-956a-121fb7eba04b STEP: Creating a pod to test consume secrets May 11 19:13:38.627: INFO: Waiting up to 5m0s for pod "pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce" in namespace "secrets-1635" to be "Succeeded or Failed" May 11 19:13:38.631: INFO: Pod "pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062775ms May 11 19:13:41.326: INFO: Pod "pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.699041142s May 11 19:13:43.523: INFO: Pod "pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce": Phase="Running", Reason="", readiness=true. Elapsed: 4.896472534s May 11 19:13:45.528: INFO: Pod "pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.901052233s STEP: Saw pod success May 11 19:13:45.528: INFO: Pod "pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce" satisfied condition "Succeeded or Failed" May 11 19:13:45.531: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce container secret-volume-test: STEP: delete the pod May 11 19:13:46.124: INFO: Waiting for pod pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce to disappear May 11 19:13:46.499: INFO: Pod pod-secrets-5b886b98-6516-4368-8103-2a5e9fc32cce no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:13:46.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1635" for this suite. 
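The defaultMode being verified above is the file-mode knob on the secret volume source. A sketch of the pod shape involved (the image, paths, and 0400 mode are placeholders for illustration):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// SecretVolumePod mounts a secret with an explicit defaultMode and lists the
// resulting file permissions, which is what this kind of test asserts on.
func SecretVolumePod(podName, secretName string) *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: podName},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secretName,
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
}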
• [SLOW TEST:9.186 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":90,"skipped":1405,"failed":0} S ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:13:46.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-4865 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4865 to expose endpoints map[] May 11 19:13:48.422: INFO: Get endpoints failed (8.378979ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 11 19:13:49.535: INFO: successfully validated that service endpoint-test2 in namespace services-4865 exposes endpoints map[] (1.12082309s elapsed) STEP: Creating pod pod1 in namespace services-4865 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4865 to expose endpoints map[pod1:[80]] May 11 19:13:56.008: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (6.466596508s elapsed, will retry) May 11 19:13:58.855: INFO: successfully validated that service endpoint-test2 in namespace services-4865 exposes endpoints map[pod1:[80]] (9.312951324s elapsed) STEP: Creating pod pod2 in namespace services-4865 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4865 to expose endpoints map[pod1:[80] pod2:[80]] May 11 19:14:03.931: INFO: successfully validated that service endpoint-test2 in namespace services-4865 exposes endpoints map[pod1:[80] pod2:[80]] (4.90600975s elapsed) STEP: Deleting pod pod1 in namespace services-4865 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4865 to expose endpoints map[pod2:[80]] May 11 19:14:04.053: INFO: successfully validated that service endpoint-test2 in namespace services-4865 exposes endpoints map[pod2:[80]] (117.684093ms elapsed) STEP: Deleting pod pod2 in namespace services-4865 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4865 to expose endpoints map[] May 11 19:14:05.602: INFO: successfully validated that service endpoint-test2 in namespace services-4865 exposes endpoints map[] (1.54522707s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:14:06.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-4865" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:19.952 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":91,"skipped":1406,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:14:06.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 11 19:14:07.267: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6016" to be "Succeeded or Failed" May 11 19:14:07.286: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.013084ms May 11 19:14:09.298: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030369996s May 11 19:14:11.357: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089477891s May 11 19:14:13.540: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 6.272217511s May 11 19:14:15.544: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.276551998s STEP: Saw pod success May 11 19:14:15.544: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 11 19:14:15.547: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 11 19:14:15.633: INFO: Waiting for pod pod-host-path-test to disappear May 11 19:14:15.644: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:14:15.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6016" for this suite. 
• [SLOW TEST:8.756 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1422,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:14:15.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-493532be-0550-40f4-9a7c-70b7a84aa06d STEP: Creating a pod to test consume secrets May 11 19:14:15.915: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1" in namespace "projected-6406" to be "Succeeded or Failed" May 11 19:14:15.971: INFO: Pod "pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1": Phase="Pending", Reason="", readiness=false. Elapsed: 56.175187ms May 11 19:14:18.181: INFO: Pod "pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265633824s May 11 19:14:20.194: INFO: Pod "pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.278551512s May 11 19:14:22.198: INFO: Pod "pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.282912877s STEP: Saw pod success May 11 19:14:22.198: INFO: Pod "pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1" satisfied condition "Succeeded or Failed" May 11 19:14:22.201: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1 container projected-secret-volume-test: STEP: delete the pod May 11 19:14:22.363: INFO: Waiting for pod pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1 to disappear May 11 19:14:22.400: INFO: Pod pod-projected-secrets-6cd68dc1-9817-4af4-93fc-5048fb92a6a1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:14:22.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6406" for this suite. 
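A projected secret differs from a plain secret volume only in that the secret is exposed through the "projected" volume type, which can merge several sources into one mount. A minimal sketch of that volume source (names are parameters for illustration):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedSecretSource builds the projected volume used by this kind of
// test: a single secret exposed through the projected volume type.
func projectedSecretSource(secretName string) corev1.VolumeSource {
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			}},
		},
	}
}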
• [SLOW TEST:6.756 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1422,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:14:22.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:14:23.356: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:14:25.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821263, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821263, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821263, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821263, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:14:28.996: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy 
mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:14:30.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6093" for this suite. STEP: Destroying namespace "webhook-6093-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.010 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":94,"skipped":1430,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:14:31.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 11 19:14:35.812: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-8428 PodName:var-expansion-876d40f0-3f01-4948-9d4b-e4154bdb0725 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:14:35.812: INFO: >>> kubeConfig: /root/.kube/config I0511 19:14:35.846254 7 log.go:172] (0xc00303ab00) (0xc001af3f40) Create stream I0511 19:14:35.846317 7 log.go:172] (0xc00303ab00) (0xc001af3f40) Stream added, broadcasting: 1 I0511 19:14:35.848010 7 log.go:172] (0xc00303ab00) Reply frame received for 1 I0511 19:14:35.848047 7 log.go:172] (0xc00303ab00) (0xc002afa000) Create stream I0511 19:14:35.848059 7 log.go:172] (0xc00303ab00) (0xc002afa000) Stream added, broadcasting: 3 I0511 19:14:35.848870 7 log.go:172] (0xc00303ab00) Reply frame received for 3 I0511 19:14:35.848897 7 log.go:172] (0xc00303ab00) (0xc0014e7360) Create stream I0511 19:14:35.848907 7 log.go:172] (0xc00303ab00) (0xc0014e7360) Stream added, broadcasting: 5 I0511 19:14:35.849856 7 log.go:172] (0xc00303ab00) Reply frame received for 5 I0511 19:14:35.905636 7 log.go:172] (0xc00303ab00) Data frame received for 3 I0511 19:14:35.905666 7 log.go:172] (0xc002afa000) (3) Data frame handling I0511 19:14:35.905749 7 log.go:172] (0xc00303ab00) Data frame 
received for 5 I0511 19:14:35.905768 7 log.go:172] (0xc0014e7360) (5) Data frame handling I0511 19:14:35.906796 7 log.go:172] (0xc00303ab00) Data frame received for 1 I0511 19:14:35.906811 7 log.go:172] (0xc001af3f40) (1) Data frame handling I0511 19:14:35.906817 7 log.go:172] (0xc001af3f40) (1) Data frame sent I0511 19:14:35.906880 7 log.go:172] (0xc00303ab00) (0xc001af3f40) Stream removed, broadcasting: 1 I0511 19:14:35.906929 7 log.go:172] (0xc00303ab00) (0xc001af3f40) Stream removed, broadcasting: 1 I0511 19:14:35.906940 7 log.go:172] (0xc00303ab00) (0xc002afa000) Stream removed, broadcasting: 3 I0511 19:14:35.906953 7 log.go:172] (0xc00303ab00) Go away received I0511 19:14:35.906990 7 log.go:172] (0xc00303ab00) (0xc0014e7360) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 11 19:14:35.908: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-8428 PodName:var-expansion-876d40f0-3f01-4948-9d4b-e4154bdb0725 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:14:35.908: INFO: >>> kubeConfig: /root/.kube/config I0511 19:14:35.938006 7 log.go:172] (0xc00291ebb0) (0xc0014e7d60) Create stream I0511 19:14:35.938041 7 log.go:172] (0xc00291ebb0) (0xc0014e7d60) Stream added, broadcasting: 1 I0511 19:14:35.939824 7 log.go:172] (0xc00291ebb0) Reply frame received for 1 I0511 19:14:35.939864 7 log.go:172] (0xc00291ebb0) (0xc0014e7e00) Create stream I0511 19:14:35.939874 7 log.go:172] (0xc00291ebb0) (0xc0014e7e00) Stream added, broadcasting: 3 I0511 19:14:35.940569 7 log.go:172] (0xc00291ebb0) Reply frame received for 3 I0511 19:14:35.940605 7 log.go:172] (0xc00291ebb0) (0xc000e280a0) Create stream I0511 19:14:35.940622 7 log.go:172] (0xc00291ebb0) (0xc000e280a0) Stream added, broadcasting: 5 I0511 19:14:35.941694 7 log.go:172] (0xc00291ebb0) Reply frame received for 5 I0511 19:14:35.990065 7 log.go:172] (0xc00291ebb0) Data frame received for 3 I0511 19:14:35.990085 7 log.go:172] (0xc0014e7e00) (3) Data frame handling I0511 19:14:35.990137 7 log.go:172] (0xc00291ebb0) Data frame received for 5 I0511 19:14:35.990177 7 log.go:172] (0xc000e280a0) (5) Data frame handling I0511 19:14:35.991851 7 log.go:172] (0xc00291ebb0) Data frame received for 1 I0511 19:14:35.991865 7 log.go:172] (0xc0014e7d60) (1) Data frame handling I0511 19:14:35.991875 7 log.go:172] (0xc0014e7d60) (1) Data frame sent I0511 19:14:35.991926 7 log.go:172] (0xc00291ebb0) (0xc0014e7d60) Stream removed, broadcasting: 1 I0511 19:14:35.992003 7 log.go:172] (0xc00291ebb0) (0xc0014e7d60) Stream removed, broadcasting: 1 I0511 19:14:35.992037 7 log.go:172] (0xc00291ebb0) (0xc0014e7e00) Stream removed, broadcasting: 3 I0511 19:14:35.992125 7 log.go:172] (0xc00291ebb0) (0xc000e280a0) Stream removed, broadcasting: 5 STEP: updating the annotation value May 11 19:14:36.514: INFO: Successfully updated pod "var-expansion-876d40f0-3f01-4948-9d4b-e4154bdb0725" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 11 19:14:36.570: INFO: Deleting pod "var-expansion-876d40f0-3f01-4948-9d4b-e4154bdb0725" in namespace "var-expansion-8428" May 11 19:14:36.587: INFO: Wait up to 5m0s for pod "var-expansion-876d40f0-3f01-4948-9d4b-e4154bdb0725" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:15:16.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"var-expansion-8428" for this suite. • [SLOW TEST:45.244 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":95,"skipped":1443,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:15:16.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:15:29.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3316" for this suite. • [SLOW TEST:12.555 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":96,"skipped":1445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:15:29.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 11 19:15:29.320: INFO: Waiting up to 5m0s for pod "pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25" in namespace "emptydir-5056" to be "Succeeded or Failed" May 11 19:15:29.393: INFO: Pod "pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25": Phase="Pending", Reason="", readiness=false. Elapsed: 73.046146ms May 11 19:15:31.399: INFO: Pod "pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07858856s May 11 19:15:33.403: INFO: Pod "pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25": Phase="Running", Reason="", readiness=true. Elapsed: 4.082731089s May 11 19:15:35.464: INFO: Pod "pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.144351323s STEP: Saw pod success May 11 19:15:35.464: INFO: Pod "pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25" satisfied condition "Succeeded or Failed" May 11 19:15:35.468: INFO: Trying to get logs from node latest-worker2 pod pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25 container test-container: STEP: delete the pod May 11 19:15:35.609: INFO: Waiting for pod pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25 to disappear May 11 19:15:35.618: INFO: Pod pod-aa443a9e-fc82-4de9-8f6a-21103cee1e25 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:15:35.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5056" for this suite. 
• [SLOW TEST:6.408 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":97,"skipped":1473,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:15:35.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9763 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 19:15:35.778: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 11 19:15:35.947: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 19:15:38.207: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 19:15:40.100: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 19:15:41.951: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:15:43.951: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:15:45.951: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:15:47.951: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:15:49.951: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:15:52.069: INFO: The status of Pod netserver-0 is Running (Ready = true) May 11 19:15:52.100: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 11 19:15:58.513: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.102 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9763 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:15:58.513: INFO: >>> kubeConfig: /root/.kube/config I0511 19:15:58.542672 7 log.go:172] (0xc00294e420) (0xc0012dd900) Create stream I0511 19:15:58.542698 7 log.go:172] (0xc00294e420) (0xc0012dd900) Stream added, broadcasting: 1 I0511 19:15:58.544064 7 log.go:172] (0xc00294e420) Reply frame received for 1 I0511 19:15:58.544085 7 log.go:172] (0xc00294e420) (0xc0014e60a0) Create stream I0511 19:15:58.544093 7 log.go:172] (0xc00294e420) (0xc0014e60a0) Stream added, broadcasting: 3 I0511 19:15:58.544654 7 log.go:172] (0xc00294e420) Reply frame received for 3 I0511 19:15:58.544671 7 log.go:172] (0xc00294e420) (0xc0012dd9a0) Create stream I0511 
19:15:58.544677 7 log.go:172] (0xc00294e420) (0xc0012dd9a0) Stream added, broadcasting: 5 I0511 19:15:58.545339 7 log.go:172] (0xc00294e420) Reply frame received for 5 I0511 19:15:59.596436 7 log.go:172] (0xc00294e420) Data frame received for 3 I0511 19:15:59.596477 7 log.go:172] (0xc0014e60a0) (3) Data frame handling I0511 19:15:59.596498 7 log.go:172] (0xc0014e60a0) (3) Data frame sent I0511 19:15:59.596528 7 log.go:172] (0xc00294e420) Data frame received for 3 I0511 19:15:59.596556 7 log.go:172] (0xc0014e60a0) (3) Data frame handling I0511 19:15:59.596572 7 log.go:172] (0xc00294e420) Data frame received for 5 I0511 19:15:59.596588 7 log.go:172] (0xc0012dd9a0) (5) Data frame handling I0511 19:15:59.598314 7 log.go:172] (0xc00294e420) Data frame received for 1 I0511 19:15:59.598363 7 log.go:172] (0xc0012dd900) (1) Data frame handling I0511 19:15:59.598412 7 log.go:172] (0xc0012dd900) (1) Data frame sent I0511 19:15:59.598433 7 log.go:172] (0xc00294e420) (0xc0012dd900) Stream removed, broadcasting: 1 I0511 19:15:59.598453 7 log.go:172] (0xc00294e420) Go away received I0511 19:15:59.598568 7 log.go:172] (0xc00294e420) (0xc0012dd900) Stream removed, broadcasting: 1 I0511 19:15:59.598604 7 log.go:172] (0xc00294e420) (0xc0014e60a0) Stream removed, broadcasting: 3 I0511 19:15:59.598632 7 log.go:172] (0xc00294e420) (0xc0012dd9a0) Stream removed, broadcasting: 5 May 11 19:15:59.598: INFO: Found all expected endpoints: [netserver-0] May 11 19:15:59.601: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.165 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9763 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:15:59.601: INFO: >>> kubeConfig: /root/.kube/config I0511 19:15:59.625617 7 log.go:172] (0xc002ea6420) (0xc000b6dd60) Create stream I0511 19:15:59.625645 7 log.go:172] (0xc002ea6420) (0xc000b6dd60) Stream added, broadcasting: 1 I0511 19:15:59.626983 7 log.go:172] (0xc002ea6420) Reply frame received for 1 I0511 19:15:59.627024 7 log.go:172] (0xc002ea6420) (0xc0012ddae0) Create stream I0511 19:15:59.627038 7 log.go:172] (0xc002ea6420) (0xc0012ddae0) Stream added, broadcasting: 3 I0511 19:15:59.627791 7 log.go:172] (0xc002ea6420) Reply frame received for 3 I0511 19:15:59.627812 7 log.go:172] (0xc002ea6420) (0xc0012ddc20) Create stream I0511 19:15:59.627823 7 log.go:172] (0xc002ea6420) (0xc0012ddc20) Stream added, broadcasting: 5 I0511 19:15:59.628465 7 log.go:172] (0xc002ea6420) Reply frame received for 5 I0511 19:16:00.698614 7 log.go:172] (0xc002ea6420) Data frame received for 3 I0511 19:16:00.698660 7 log.go:172] (0xc0012ddae0) (3) Data frame handling I0511 19:16:00.698680 7 log.go:172] (0xc0012ddae0) (3) Data frame sent I0511 19:16:00.698751 7 log.go:172] (0xc002ea6420) Data frame received for 5 I0511 19:16:00.698782 7 log.go:172] (0xc0012ddc20) (5) Data frame handling I0511 19:16:00.698948 7 log.go:172] (0xc002ea6420) Data frame received for 3 I0511 19:16:00.698976 7 log.go:172] (0xc0012ddae0) (3) Data frame handling I0511 19:16:00.701426 7 log.go:172] (0xc002ea6420) Data frame received for 1 I0511 19:16:00.701484 7 log.go:172] (0xc000b6dd60) (1) Data frame handling I0511 19:16:00.701581 7 log.go:172] (0xc000b6dd60) (1) Data frame sent I0511 19:16:00.701667 7 log.go:172] (0xc002ea6420) (0xc000b6dd60) Stream removed, broadcasting: 1 I0511 19:16:00.701834 7 log.go:172] (0xc002ea6420) (0xc000b6dd60) Stream removed, broadcasting: 1 I0511 19:16:00.701869 7 log.go:172] (0xc002ea6420) 
(0xc0012ddae0) Stream removed, broadcasting: 3 I0511 19:16:00.701958 7 log.go:172] (0xc002ea6420) (0xc0012ddc20) Stream removed, broadcasting: 5 May 11 19:16:00.702: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:16:00.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0511 19:16:00.702119 7 log.go:172] (0xc002ea6420) Go away received STEP: Destroying namespace "pod-network-test-9763" for this suite. • [SLOW TEST:25.421 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":98,"skipped":1487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:16:01.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 11 19:16:02.469: INFO: Waiting up to 5m0s for pod "var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283" in namespace "var-expansion-8641" to be "Succeeded or Failed" May 11 19:16:02.746: INFO: Pod "var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283": Phase="Pending", Reason="", readiness=false. Elapsed: 276.968212ms May 11 19:16:04.750: INFO: Pod "var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281008958s May 11 19:16:06.900: INFO: Pod "var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431151888s May 11 19:16:08.911: INFO: Pod "var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283": Phase="Running", Reason="", readiness=true. Elapsed: 6.441551821s May 11 19:16:11.327: INFO: Pod "var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.857880821s STEP: Saw pod success May 11 19:16:11.327: INFO: Pod "var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283" satisfied condition "Succeeded or Failed" May 11 19:16:11.330: INFO: Trying to get logs from node latest-worker pod var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283 container dapi-container: STEP: delete the pod May 11 19:16:12.047: INFO: Waiting for pod var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283 to disappear May 11 19:16:12.140: INFO: Pod var-expansion-be364eb1-dafc-4b53-8541-b5ff5e233283 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:16:12.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8641" for this suite. • [SLOW TEST:11.262 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":99,"skipped":1526,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:16:12.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-f1f045dc-d92f-492c-bfaa-79336266e6cd STEP: Creating a pod to test consume secrets May 11 19:16:13.001: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023" in namespace "projected-7729" to be "Succeeded or Failed" May 11 19:16:13.537: INFO: Pod "pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023": Phase="Pending", Reason="", readiness=false. Elapsed: 536.215368ms May 11 19:16:15.541: INFO: Pod "pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.540304189s May 11 19:16:17.668: INFO: Pod "pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023": Phase="Pending", Reason="", readiness=false. Elapsed: 4.667260465s May 11 19:16:19.709: INFO: Pod "pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.708491052s STEP: Saw pod success May 11 19:16:19.710: INFO: Pod "pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023" satisfied condition "Succeeded or Failed" May 11 19:16:19.712: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023 container projected-secret-volume-test: STEP: delete the pod May 11 19:16:20.084: INFO: Waiting for pod pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023 to disappear May 11 19:16:20.362: INFO: Pod pod-projected-secrets-b4e8cecc-b6e5-4ed9-ba39-86745e6b9023 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:16:20.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7729" for this suite. • [SLOW TEST:8.060 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1537,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:16:20.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 11 19:16:20.581: INFO: PodSpec: initContainers in spec.initContainers May 11 19:17:27.238: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-08afe3a0-a868-4e63-b94b-c5d5c96b26de", GenerateName:"", Namespace:"init-container-2624", SelfLink:"/api/v1/namespaces/init-container-2624/pods/pod-init-08afe3a0-a868-4e63-b94b-c5d5c96b26de", UID:"5af00b2c-660f-42b1-b083-57d80185e42d", ResourceVersion:"3533944", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724821380, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"581367710"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", 
Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c66040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c66060)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c66080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c660a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dwxtp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005b84000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dwxtp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dwxtp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dwxtp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0068b0098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d86000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0068b0120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0068b0140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0068b0148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0068b014c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821382, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821382, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821382, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821380, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.2.168", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.168"}}, StartTime:(*v1.Time)(0xc002c660c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002c66100), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d860e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://be7eaaa6bb70e3170ec523e5a14d343d30d03e89ee2525871571c1923763d07b", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c66120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c660e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0068b01df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:17:27.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2624" for this suite. 
• [SLOW TEST:67.280 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":101,"skipped":1579,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 19:17:27.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 19:19:28.903: INFO: Deleting pod "var-expansion-9171b0f6-e707-4050-9677-d142edd7bae9" in namespace "var-expansion-217"
May 11 19:19:28.907: INFO: Wait up to 5m0s for pod "var-expansion-9171b0f6-e707-4050-9677-d142edd7bae9" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 19:19:31.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-217" for this suite.
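
Only the cleanup of this test is visible in the log; the interesting phase happens earlier. The test relies on subPathExpr expansion: the kubelet substitutes $(VAR) references in a volume mount's subpath at pod start and refuses a result that is an absolute path, so the pod never runs its container and is eventually deleted, as above. A hedged sketch of a pod built to trip that check; the names, image, and the "/tmp" value are illustrative, not the test's exact inputs:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-absolute-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "sleep 3600"},
				// The env value is deliberately an absolute path.
				Env: []corev1.EnvVar{{Name: "SUBPATH", Value: "/tmp"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/volume_mount",
					// API validation accepts the expression, but the kubelet expands
					// it at pod start, sees "/tmp", and rejects the absolute subpath,
					// so the container is never started. A relative result such as
					// "mypath/foo" is the passing variant seen earlier in the log.
					SubPathExpr: "$(SUBPATH)",
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
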
• [SLOW TEST:123.676 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":102,"skipped":1604,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:19:31.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:19:32.884: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:19:34.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821572, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821572, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821573, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821572, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:19:36.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821572, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821572, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821573, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821572, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:19:40.011: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:19:40.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8198-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:19:41.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8464" for this suite. STEP: Destroying namespace "webhook-8464-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.204 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":103,"skipped":1606,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:19:42.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 19:19:48.850: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:19:48.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "container-runtime-5409" for this suite. • [SLOW TEST:6.434 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1615,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:19:48.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 19:19:56.540: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:19:57.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8767" for this suite. 
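
Both container-runtime cases above hinge on two container fields: terminationMessagePath, which names the file whose contents become the termination message (the "Expected: &{DONE}" checks), and terminationMessagePolicy, where FallbackToLogsOnError tells the kubelet to fall back to the tail of the container log when the container fails without writing the file. A sketch of the second case, a non-root container writing DONE to a non-default path; the concrete names and UID are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	runAsUser := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "docker.io/library/busybox:1.29",
				// Write the message to the non-default path; the kubelet copies the
				// file's contents into the container status, which is where the
				// "Expected: &{DONE}" check reads it back from.
				Command:                  []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath:   "/dev/termination-custom-log",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
				// For the FallbackToLogsOnError case above, swap the policy for
				// corev1.TerminationMessageFallbackToLogsOnError and let the
				// container fail without writing the file.
				SecurityContext: &corev1.SecurityContext{RunAsUser: &runAsUser},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
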
• [SLOW TEST:8.471 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":105,"skipped":1628,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:19:57.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 11 19:19:58.378: INFO: Created pod &Pod{ObjectMeta:{dns-8210 dns-8210 /api/v1/namespaces/dns-8210/pods/dns-8210 84142193-d89b-43a0-9e5d-f3dc0e138e09 3534782 0 2020-05-11 19:19:58 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-11 19:19:58 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7nkmk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7nkmk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7nkmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:19:58.469: INFO: The status of Pod dns-8210 is Pending, waiting for it to be Running (with Ready = true) May 11 19:20:00.753: INFO: The status of Pod dns-8210 is Pending, waiting for it to be Running (with Ready = true) May 11 
19:20:02.754: INFO: The status of Pod dns-8210 is Pending, waiting for it to be Running (with Ready = true) May 11 19:20:04.473: INFO: The status of Pod dns-8210 is Pending, waiting for it to be Running (with Ready = true) May 11 19:20:06.514: INFO: The status of Pod dns-8210 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 11 19:20:06.514: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8210 PodName:dns-8210 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:20:06.514: INFO: >>> kubeConfig: /root/.kube/config I0511 19:20:06.623684 7 log.go:172] (0xc00303a370) (0xc0015afe00) Create stream I0511 19:20:06.623720 7 log.go:172] (0xc00303a370) (0xc0015afe00) Stream added, broadcasting: 1 I0511 19:20:06.625771 7 log.go:172] (0xc00303a370) Reply frame received for 1 I0511 19:20:06.625801 7 log.go:172] (0xc00303a370) (0xc0017a4320) Create stream I0511 19:20:06.625812 7 log.go:172] (0xc00303a370) (0xc0017a4320) Stream added, broadcasting: 3 I0511 19:20:06.626752 7 log.go:172] (0xc00303a370) Reply frame received for 3 I0511 19:20:06.626794 7 log.go:172] (0xc00303a370) (0xc001260000) Create stream I0511 19:20:06.626810 7 log.go:172] (0xc00303a370) (0xc001260000) Stream added, broadcasting: 5 I0511 19:20:06.627666 7 log.go:172] (0xc00303a370) Reply frame received for 5 I0511 19:20:06.701810 7 log.go:172] (0xc00303a370) Data frame received for 3 I0511 19:20:06.701835 7 log.go:172] (0xc0017a4320) (3) Data frame handling I0511 19:20:06.701852 7 log.go:172] (0xc0017a4320) (3) Data frame sent I0511 19:20:06.703218 7 log.go:172] (0xc00303a370) Data frame received for 5 I0511 19:20:06.703282 7 log.go:172] (0xc001260000) (5) Data frame handling I0511 19:20:06.703392 7 log.go:172] (0xc00303a370) Data frame received for 3 I0511 19:20:06.703414 7 log.go:172] (0xc0017a4320) (3) Data frame handling I0511 19:20:06.705578 7 log.go:172] (0xc00303a370) Data frame received for 1 I0511 19:20:06.705613 7 log.go:172] (0xc0015afe00) (1) Data frame handling I0511 19:20:06.705651 7 log.go:172] (0xc0015afe00) (1) Data frame sent I0511 19:20:06.705679 7 log.go:172] (0xc00303a370) (0xc0015afe00) Stream removed, broadcasting: 1 I0511 19:20:06.705710 7 log.go:172] (0xc00303a370) Go away received I0511 19:20:06.705848 7 log.go:172] (0xc00303a370) (0xc0015afe00) Stream removed, broadcasting: 1 I0511 19:20:06.705871 7 log.go:172] (0xc00303a370) (0xc0017a4320) Stream removed, broadcasting: 3 I0511 19:20:06.705881 7 log.go:172] (0xc00303a370) (0xc001260000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 11 19:20:06.705: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8210 PodName:dns-8210 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:20:06.705: INFO: >>> kubeConfig: /root/.kube/config I0511 19:20:06.853612 7 log.go:172] (0xc002ea6420) (0xc001342460) Create stream I0511 19:20:06.853672 7 log.go:172] (0xc002ea6420) (0xc001342460) Stream added, broadcasting: 1 I0511 19:20:06.855657 7 log.go:172] (0xc002ea6420) Reply frame received for 1 I0511 19:20:06.855705 7 log.go:172] (0xc002ea6420) (0xc001342500) Create stream I0511 19:20:06.855713 7 log.go:172] (0xc002ea6420) (0xc001342500) Stream added, broadcasting: 3 I0511 19:20:06.856594 7 log.go:172] (0xc002ea6420) Reply frame received for 3 I0511 19:20:06.856650 7 log.go:172] (0xc002ea6420) (0xc0015aff40) Create stream I0511 19:20:06.856678 7 log.go:172] (0xc002ea6420) (0xc0015aff40) Stream added, broadcasting: 5 I0511 19:20:06.857801 7 log.go:172] (0xc002ea6420) Reply frame received for 5 I0511 19:20:06.922954 7 log.go:172] (0xc002ea6420) Data frame received for 3 I0511 19:20:06.922995 7 log.go:172] (0xc001342500) (3) Data frame handling I0511 19:20:06.923017 7 log.go:172] (0xc001342500) (3) Data frame sent I0511 19:20:06.924167 7 log.go:172] (0xc002ea6420) Data frame received for 3 I0511 19:20:06.924194 7 log.go:172] (0xc001342500) (3) Data frame handling I0511 19:20:06.924224 7 log.go:172] (0xc002ea6420) Data frame received for 5 I0511 19:20:06.924257 7 log.go:172] (0xc0015aff40) (5) Data frame handling I0511 19:20:06.926451 7 log.go:172] (0xc002ea6420) Data frame received for 1 I0511 19:20:06.926464 7 log.go:172] (0xc001342460) (1) Data frame handling I0511 19:20:06.926484 7 log.go:172] (0xc001342460) (1) Data frame sent I0511 19:20:06.926501 7 log.go:172] (0xc002ea6420) (0xc001342460) Stream removed, broadcasting: 1 I0511 19:20:06.926518 7 log.go:172] (0xc002ea6420) Go away received I0511 19:20:06.926648 7 log.go:172] (0xc002ea6420) (0xc001342460) Stream removed, broadcasting: 1 I0511 19:20:06.926683 7 log.go:172] (0xc002ea6420) (0xc001342500) Stream removed, broadcasting: 3 I0511 19:20:06.926702 7 log.go:172] (0xc002ea6420) (0xc0015aff40) Stream removed, broadcasting: 5 May 11 19:20:06.926: INFO: Deleting pod dns-8210... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:20:07.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8210" for this suite. 
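
The dnsConfig behavior this test verifies is visible in the pod dump above: dnsPolicy None discards the cluster's resolver settings, and dnsConfig supplies the complete replacement from which the pod's /etc/resolv.conf is generated. A client-go sketch of that spec, reusing the nameserver and search values from the dump; the pod name and namespace are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			// dnsPolicy None ignores the cluster resolver entirely; dnsConfig
			// then supplies the whole resolv.conf, as in the spec dump above.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				Args:  []string{"pause"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
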
• [SLOW TEST:10.602 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":106,"skipped":1643,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:20:08.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4512 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4512 I0511 19:20:09.848512 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4512, replica count: 2 I0511 19:20:12.898939 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:20:15.899135 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:20:18.899376 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:20:21.899567 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 19:20:21.899: INFO: Creating new exec pod May 11 19:20:36.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4512 execpod25zhm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 11 19:20:47.420: INFO: stderr: "I0511 19:20:47.298891 1014 log.go:172] (0xc00003b080) (0xc000737540) Create stream\nI0511 19:20:47.298936 1014 log.go:172] (0xc00003b080) (0xc000737540) Stream added, broadcasting: 1\nI0511 19:20:47.300445 1014 log.go:172] (0xc00003b080) Reply frame received for 1\nI0511 19:20:47.300488 1014 log.go:172] (0xc00003b080) (0xc000718d20) Create stream\nI0511 19:20:47.300499 1014 log.go:172] (0xc00003b080) (0xc000718d20) Stream added, broadcasting: 3\nI0511 19:20:47.301305 1014 log.go:172] (0xc00003b080) Reply frame received for 3\nI0511 19:20:47.301332 1014 log.go:172] (0xc00003b080) (0xc000719cc0) Create stream\nI0511 19:20:47.301343 1014 log.go:172] (0xc00003b080) (0xc000719cc0) Stream added, broadcasting: 5\nI0511 
19:20:47.301960 1014 log.go:172] (0xc00003b080) Reply frame received for 5\nI0511 19:20:47.412263 1014 log.go:172] (0xc00003b080) Data frame received for 5\nI0511 19:20:47.412283 1014 log.go:172] (0xc000719cc0) (5) Data frame handling\nI0511 19:20:47.412295 1014 log.go:172] (0xc000719cc0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0511 19:20:47.413817 1014 log.go:172] (0xc00003b080) Data frame received for 5\nI0511 19:20:47.413844 1014 log.go:172] (0xc000719cc0) (5) Data frame handling\nI0511 19:20:47.413865 1014 log.go:172] (0xc000719cc0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0511 19:20:47.414031 1014 log.go:172] (0xc00003b080) Data frame received for 5\nI0511 19:20:47.414059 1014 log.go:172] (0xc00003b080) Data frame received for 3\nI0511 19:20:47.414082 1014 log.go:172] (0xc000718d20) (3) Data frame handling\nI0511 19:20:47.414097 1014 log.go:172] (0xc000719cc0) (5) Data frame handling\nI0511 19:20:47.415580 1014 log.go:172] (0xc00003b080) Data frame received for 1\nI0511 19:20:47.415595 1014 log.go:172] (0xc000737540) (1) Data frame handling\nI0511 19:20:47.415606 1014 log.go:172] (0xc000737540) (1) Data frame sent\nI0511 19:20:47.415615 1014 log.go:172] (0xc00003b080) (0xc000737540) Stream removed, broadcasting: 1\nI0511 19:20:47.415859 1014 log.go:172] (0xc00003b080) (0xc000737540) Stream removed, broadcasting: 1\nI0511 19:20:47.415872 1014 log.go:172] (0xc00003b080) (0xc000718d20) Stream removed, broadcasting: 3\nI0511 19:20:47.415879 1014 log.go:172] (0xc00003b080) (0xc000719cc0) Stream removed, broadcasting: 5\n" May 11 19:20:47.421: INFO: stdout: "" May 11 19:20:47.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4512 execpod25zhm -- /bin/sh -x -c nc -zv -t -w 2 10.107.83.166 80' May 11 19:20:47.629: INFO: stderr: "I0511 19:20:47.551315 1041 log.go:172] (0xc0009bdb80) (0xc0007c5ae0) Create stream\nI0511 19:20:47.551428 1041 log.go:172] (0xc0009bdb80) (0xc0007c5ae0) Stream added, broadcasting: 1\nI0511 19:20:47.553659 1041 log.go:172] (0xc0009bdb80) Reply frame received for 1\nI0511 19:20:47.553696 1041 log.go:172] (0xc0009bdb80) (0xc0007d0500) Create stream\nI0511 19:20:47.553709 1041 log.go:172] (0xc0009bdb80) (0xc0007d0500) Stream added, broadcasting: 3\nI0511 19:20:47.554466 1041 log.go:172] (0xc0009bdb80) Reply frame received for 3\nI0511 19:20:47.554497 1041 log.go:172] (0xc0009bdb80) (0xc0007d8dc0) Create stream\nI0511 19:20:47.554508 1041 log.go:172] (0xc0009bdb80) (0xc0007d8dc0) Stream added, broadcasting: 5\nI0511 19:20:47.555484 1041 log.go:172] (0xc0009bdb80) Reply frame received for 5\nI0511 19:20:47.620316 1041 log.go:172] (0xc0009bdb80) Data frame received for 5\nI0511 19:20:47.620336 1041 log.go:172] (0xc0007d8dc0) (5) Data frame handling\nI0511 19:20:47.620349 1041 log.go:172] (0xc0007d8dc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.107.83.166 80\nI0511 19:20:47.623861 1041 log.go:172] (0xc0009bdb80) Data frame received for 3\nI0511 19:20:47.623884 1041 log.go:172] (0xc0007d0500) (3) Data frame handling\nI0511 19:20:47.623898 1041 log.go:172] (0xc0009bdb80) Data frame received for 5\nI0511 19:20:47.623916 1041 log.go:172] (0xc0007d8dc0) (5) Data frame handling\nI0511 19:20:47.623943 1041 log.go:172] (0xc0007d8dc0) (5) Data frame sent\nConnection to 10.107.83.166 80 port [tcp/http] succeeded!\nI0511 19:20:47.623958 1041 log.go:172] (0xc0009bdb80) Data frame received for 1\nI0511 19:20:47.623981 1041 
log.go:172] (0xc0007c5ae0) (1) Data frame handling\nI0511 19:20:47.623991 1041 log.go:172] (0xc0007c5ae0) (1) Data frame sent\nI0511 19:20:47.624013 1041 log.go:172] (0xc0009bdb80) (0xc0007c5ae0) Stream removed, broadcasting: 1\nI0511 19:20:47.624047 1041 log.go:172] (0xc0009bdb80) Data frame received for 5\nI0511 19:20:47.624058 1041 log.go:172] (0xc0007d8dc0) (5) Data frame handling\nI0511 19:20:47.624072 1041 log.go:172] (0xc0009bdb80) Go away received\nI0511 19:20:47.624366 1041 log.go:172] (0xc0009bdb80) (0xc0007c5ae0) Stream removed, broadcasting: 1\nI0511 19:20:47.624379 1041 log.go:172] (0xc0009bdb80) (0xc0007d0500) Stream removed, broadcasting: 3\nI0511 19:20:47.624386 1041 log.go:172] (0xc0009bdb80) (0xc0007d8dc0) Stream removed, broadcasting: 5\n" May 11 19:20:47.629: INFO: stdout: "" May 11 19:20:47.629: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:20:47.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4512" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:39.743 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":107,"skipped":1663,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:20:47.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-1ea39741-f42f-4c87-bf26-c68430eb7aab STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-1ea39741-f42f-4c87-bf26-c68430eb7aab STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:22:22.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2974" for this suite. 
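For reference, the projection mutated above corresponds to a pod of roughly this shape (mount path and container args are illustrative; the ConfigMap name is the one from the run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps     # hypothetical name
spec:
  containers:
    - name: projected-configmap-volume-test
      image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
      args: ["pause"]
      volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
  volumes:
    - name: projected-configmap-volume
      projected:
        sources:
          - configMap:
              name: projected-configmap-test-upd-1ea39741-f42f-4c87-bf26-c68430eb7aab

Because the volume is kubelet-managed, an update to the ConfigMap is eventually re-synced into the mounted files, which is what the "waiting to observe update in volume" step polls for.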
• [SLOW TEST:94.423 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":108,"skipped":1670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:22:22.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-be95865c-df76-4fd1-9ae9-327b8965e723 STEP: updating the pod May 11 19:22:33.156: INFO: Successfully updated pod "var-expansion-be95865c-df76-4fd1-9ae9-327b8965e723" STEP: waiting for pod and container restart STEP: Failing liveness probe May 11 19:22:33.177: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-6570 PodName:var-expansion-be95865c-df76-4fd1-9ae9-327b8965e723 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:22:33.177: INFO: >>> kubeConfig: /root/.kube/config I0511 19:22:33.214870 7 log.go:172] (0xc00294e370) (0xc000f872c0) Create stream I0511 19:22:33.214899 7 log.go:172] (0xc00294e370) (0xc000f872c0) Stream added, broadcasting: 1 I0511 19:22:33.216387 7 log.go:172] (0xc00294e370) Reply frame received for 1 I0511 19:22:33.216414 7 log.go:172] (0xc00294e370) (0xc000dbf860) Create stream I0511 19:22:33.216423 7 log.go:172] (0xc00294e370) (0xc000dbf860) Stream added, broadcasting: 3 I0511 19:22:33.217232 7 log.go:172] (0xc00294e370) Reply frame received for 3 I0511 19:22:33.217280 7 log.go:172] (0xc00294e370) (0xc000f87400) Create stream I0511 19:22:33.217294 7 log.go:172] (0xc00294e370) (0xc000f87400) Stream added, broadcasting: 5 I0511 19:22:33.218111 7 log.go:172] (0xc00294e370) Reply frame received for 5 I0511 19:22:33.268887 7 log.go:172] (0xc00294e370) Data frame received for 3 I0511 19:22:33.268912 7 log.go:172] (0xc000dbf860) (3) Data frame handling I0511 19:22:33.268929 7 log.go:172] (0xc00294e370) Data frame received for 5 I0511 19:22:33.268934 7 log.go:172] (0xc000f87400) (5) Data frame handling I0511 19:22:33.270668 7 log.go:172] (0xc00294e370) Data frame received for 1 I0511 19:22:33.270703 7 log.go:172] (0xc000f872c0) (1) Data frame handling I0511 19:22:33.270737 7 log.go:172] (0xc000f872c0) (1) Data frame sent I0511 19:22:33.270760 7 log.go:172] (0xc00294e370) (0xc000f872c0) Stream removed, broadcasting: 1 I0511 19:22:33.270848 7 log.go:172] (0xc00294e370) Go away received I0511 
19:22:33.270907 7 log.go:172] (0xc00294e370) (0xc000f872c0) Stream removed, broadcasting: 1 I0511 19:22:33.270950 7 log.go:172] (0xc00294e370) (0xc000dbf860) Stream removed, broadcasting: 3 I0511 19:22:33.270963 7 log.go:172] (0xc00294e370) (0xc000f87400) Stream removed, broadcasting: 5 May 11 19:22:33.270: INFO: Pod exec output: / STEP: Waiting for container to restart May 11 19:22:33.282: INFO: Container dapi-container, restarts: 0 May 11 19:22:43.396: INFO: Container dapi-container, restarts: 0 May 11 19:22:53.641: INFO: Container dapi-container, restarts: 0 May 11 19:23:03.285: INFO: Container dapi-container, restarts: 0 May 11 19:23:13.384: INFO: Container dapi-container, restarts: 1 May 11 19:23:13.384: INFO: Container has restart count: 1 STEP: Rewriting the file May 11 19:23:13.384: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-6570 PodName:var-expansion-be95865c-df76-4fd1-9ae9-327b8965e723 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:23:13.384: INFO: >>> kubeConfig: /root/.kube/config I0511 19:23:13.473698 7 log.go:172] (0xc002ea6000) (0xc000f2edc0) Create stream I0511 19:23:13.473743 7 log.go:172] (0xc002ea6000) (0xc000f2edc0) Stream added, broadcasting: 1 I0511 19:23:13.475747 7 log.go:172] (0xc002ea6000) Reply frame received for 1 I0511 19:23:13.475787 7 log.go:172] (0xc002ea6000) (0xc000ae8820) Create stream I0511 19:23:13.475810 7 log.go:172] (0xc002ea6000) (0xc000ae8820) Stream added, broadcasting: 3 I0511 19:23:13.477555 7 log.go:172] (0xc002ea6000) Reply frame received for 3 I0511 19:23:13.477603 7 log.go:172] (0xc002ea6000) (0xc0013a9040) Create stream I0511 19:23:13.477621 7 log.go:172] (0xc002ea6000) (0xc0013a9040) Stream added, broadcasting: 5 I0511 19:23:13.478531 7 log.go:172] (0xc002ea6000) Reply frame received for 5 I0511 19:23:13.531207 7 log.go:172] (0xc002ea6000) Data frame received for 3 I0511 19:23:13.531263 7 log.go:172] (0xc000ae8820) (3) Data frame handling I0511 19:23:13.531653 7 log.go:172] (0xc002ea6000) Data frame received for 5 I0511 19:23:13.531667 7 log.go:172] (0xc0013a9040) (5) Data frame handling I0511 19:23:13.533066 7 log.go:172] (0xc002ea6000) Data frame received for 1 I0511 19:23:13.533095 7 log.go:172] (0xc000f2edc0) (1) Data frame handling I0511 19:23:13.533264 7 log.go:172] (0xc000f2edc0) (1) Data frame sent I0511 19:23:13.533296 7 log.go:172] (0xc002ea6000) (0xc000f2edc0) Stream removed, broadcasting: 1 I0511 19:23:13.533323 7 log.go:172] (0xc002ea6000) Go away received I0511 19:23:13.533455 7 log.go:172] (0xc002ea6000) (0xc000f2edc0) Stream removed, broadcasting: 1 I0511 19:23:13.533483 7 log.go:172] (0xc002ea6000) (0xc000ae8820) Stream removed, broadcasting: 3 I0511 19:23:13.533494 7 log.go:172] (0xc002ea6000) (0xc0013a9040) Stream removed, broadcasting: 5 May 11 19:23:13.533: INFO: Exec stderr: "" May 11 19:23:13.533: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 11 19:23:45.568: INFO: Container has restart count: 2 May 11 19:24:47.582: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 11 19:24:47.584: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-6570 PodName:var-expansion-be95865c-df76-4fd1-9ae9-327b8965e723 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:24:47.584: INFO: >>> kubeConfig: /root/.kube/config I0511 
19:24:47.614040 7 log.go:172] (0xc00294e420) (0xc002dcee60) Create stream I0511 19:24:47.614063 7 log.go:172] (0xc00294e420) (0xc002dcee60) Stream added, broadcasting: 1 I0511 19:24:47.615489 7 log.go:172] (0xc00294e420) Reply frame received for 1 I0511 19:24:47.615534 7 log.go:172] (0xc00294e420) (0xc0017a4000) Create stream I0511 19:24:47.615597 7 log.go:172] (0xc00294e420) (0xc0017a4000) Stream added, broadcasting: 3 I0511 19:24:47.616253 7 log.go:172] (0xc00294e420) Reply frame received for 3 I0511 19:24:47.616278 7 log.go:172] (0xc00294e420) (0xc002dcef00) Create stream I0511 19:24:47.616286 7 log.go:172] (0xc00294e420) (0xc002dcef00) Stream added, broadcasting: 5 I0511 19:24:47.616899 7 log.go:172] (0xc00294e420) Reply frame received for 5 I0511 19:24:47.661558 7 log.go:172] (0xc00294e420) Data frame received for 3 I0511 19:24:47.661606 7 log.go:172] (0xc0017a4000) (3) Data frame handling I0511 19:24:47.661628 7 log.go:172] (0xc00294e420) Data frame received for 5 I0511 19:24:47.661663 7 log.go:172] (0xc002dcef00) (5) Data frame handling I0511 19:24:47.663060 7 log.go:172] (0xc00294e420) Data frame received for 1 I0511 19:24:47.663083 7 log.go:172] (0xc002dcee60) (1) Data frame handling I0511 19:24:47.663101 7 log.go:172] (0xc002dcee60) (1) Data frame sent I0511 19:24:47.663117 7 log.go:172] (0xc00294e420) (0xc002dcee60) Stream removed, broadcasting: 1 I0511 19:24:47.663206 7 log.go:172] (0xc00294e420) (0xc002dcee60) Stream removed, broadcasting: 1 I0511 19:24:47.663238 7 log.go:172] (0xc00294e420) (0xc0017a4000) Stream removed, broadcasting: 3 I0511 19:24:47.663403 7 log.go:172] (0xc00294e420) (0xc002dcef00) Stream removed, broadcasting: 5 I0511 19:24:47.663762 7 log.go:172] (0xc00294e420) Go away received May 11 19:24:47.666: INFO: ExecWithOptions {Command:[/bin/sh -c test ! 
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-6570 PodName:var-expansion-be95865c-df76-4fd1-9ae9-327b8965e723 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:24:47.666: INFO: >>> kubeConfig: /root/.kube/config I0511 19:24:47.721908 7 log.go:172] (0xc00303a210) (0xc0017a46e0) Create stream I0511 19:24:47.721938 7 log.go:172] (0xc00303a210) (0xc0017a46e0) Stream added, broadcasting: 1 I0511 19:24:47.723361 7 log.go:172] (0xc00303a210) Reply frame received for 1 I0511 19:24:47.723392 7 log.go:172] (0xc00303a210) (0xc000b6c820) Create stream I0511 19:24:47.723402 7 log.go:172] (0xc00303a210) (0xc000b6c820) Stream added, broadcasting: 3 I0511 19:24:47.724041 7 log.go:172] (0xc00303a210) Reply frame received for 3 I0511 19:24:47.724070 7 log.go:172] (0xc00303a210) (0xc0017a4be0) Create stream I0511 19:24:47.724079 7 log.go:172] (0xc00303a210) (0xc0017a4be0) Stream added, broadcasting: 5 I0511 19:24:47.724655 7 log.go:172] (0xc00303a210) Reply frame received for 5 I0511 19:24:47.779098 7 log.go:172] (0xc00303a210) Data frame received for 5 I0511 19:24:47.779134 7 log.go:172] (0xc00303a210) Data frame received for 3 I0511 19:24:47.779170 7 log.go:172] (0xc000b6c820) (3) Data frame handling I0511 19:24:47.779193 7 log.go:172] (0xc0017a4be0) (5) Data frame handling I0511 19:24:47.780212 7 log.go:172] (0xc00303a210) Data frame received for 1 I0511 19:24:47.780234 7 log.go:172] (0xc0017a46e0) (1) Data frame handling I0511 19:24:47.780251 7 log.go:172] (0xc0017a46e0) (1) Data frame sent I0511 19:24:47.780262 7 log.go:172] (0xc00303a210) (0xc0017a46e0) Stream removed, broadcasting: 1 I0511 19:24:47.780272 7 log.go:172] (0xc00303a210) Go away received I0511 19:24:47.780395 7 log.go:172] (0xc00303a210) (0xc0017a46e0) Stream removed, broadcasting: 1 I0511 19:24:47.780406 7 log.go:172] (0xc00303a210) (0xc000b6c820) Stream removed, broadcasting: 3 I0511 19:24:47.780414 7 log.go:172] (0xc00303a210) (0xc0017a4be0) Stream removed, broadcasting: 5 May 11 19:24:47.780: INFO: Deleting pod "var-expansion-be95865c-df76-4fd1-9ae9-327b8965e723" in namespace "var-expansion-6570" May 11 19:24:47.858: INFO: Wait up to 5m0s for pod "var-expansion-be95865c-df76-4fd1-9ae9-327b8965e723" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:25:21.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6570" for this suite. 
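The mechanism under test here is subPathExpr, which expands an environment variable into a volume mount's subpath once, when the container is created. A minimal sketch (in the real spec the variable is derived from a mutable pod field so its value can change across restarts; a literal value stands in here, and all names and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # hypothetical name
spec:
  restartPolicy: Always
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      env:
        - name: SUBPATH              # hypothetical variable name
          value: "foo"
      volumeMounts:
        - name: workdir
          mountPath: /volume_mount
          subPathExpr: $(SUBPATH)    # resolved at container creation, not on every restart
  volumes:
    - name: workdir
      emptyDir: {}

The assertion above is exactly this: after the variable's value changes and the container restarts, the subpath resolved at first start ("foo") is still the one served (test -f succeeds on the old path), and nothing appears under the new value (test ! -f on the newsubpath path).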
• [SLOW TEST:179.663 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":109,"skipped":1701,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:25:21.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:25:23.106: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:25:25.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821923, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821923, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821923, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821923, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:25:27.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821923, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821923, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821923, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724821923, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:25:32.291: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 11 19:25:38.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-6213 to-be-attached-pod -i -c=container1' May 11 19:25:38.465: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:25:38.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6213" for this suite. STEP: Destroying namespace "webhook-6213-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.877 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":110,"skipped":1706,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:25:38.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 11 19:25:38.813: INFO: Waiting up to 5m0s for pod "downward-api-e82234f2-52b9-4469-b8ae-5d0f367657af" in namespace "downward-api-4531" to be "Succeeded or Failed" May 11 19:25:38.831: INFO: Pod "downward-api-e82234f2-52b9-4469-b8ae-5d0f367657af": Phase="Pending", Reason="", readiness=false. Elapsed: 17.967046ms May 11 19:25:40.876: INFO: Pod "downward-api-e82234f2-52b9-4469-b8ae-5d0f367657af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062335549s May 11 19:25:42.942: INFO: Pod "downward-api-e82234f2-52b9-4469-b8ae-5d0f367657af": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.128451844s STEP: Saw pod success May 11 19:25:42.942: INFO: Pod "downward-api-e82234f2-52b9-4469-b8ae-5d0f367657af" satisfied condition "Succeeded or Failed" May 11 19:25:42.945: INFO: Trying to get logs from node latest-worker2 pod downward-api-e82234f2-52b9-4469-b8ae-5d0f367657af container dapi-container: STEP: delete the pod May 11 19:25:43.188: INFO: Waiting for pod downward-api-e82234f2-52b9-4469-b8ae-5d0f367657af to disappear May 11 19:25:43.204: INFO: Pod downward-api-e82234f2-52b9-4469-b8ae-5d0f367657af no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:25:43.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4531" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":111,"skipped":1718,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:25:43.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 11 19:25:43.312: INFO: Waiting up to 5m0s for pod "var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef" in namespace "var-expansion-7675" to be "Succeeded or Failed" May 11 19:25:43.348: INFO: Pod "var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 35.306981ms May 11 19:25:46.584: INFO: Pod "var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.271842929s May 11 19:25:48.878: INFO: Pod "var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.565177936s May 11 19:25:51.193: INFO: Pod "var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef": Phase="Running", Reason="", readiness=true. Elapsed: 7.880907219s May 11 19:25:53.197: INFO: Pod "var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.884829331s STEP: Saw pod success May 11 19:25:53.197: INFO: Pod "var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef" satisfied condition "Succeeded or Failed" May 11 19:25:53.200: INFO: Trying to get logs from node latest-worker2 pod var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef container dapi-container: STEP: delete the pod May 11 19:25:53.362: INFO: Waiting for pod var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef to disappear May 11 19:25:53.400: INFO: Pod var-expansion-19bd4fac-8004-4088-8b40-54380f2ce9ef no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:25:53.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7675" for this suite. • [SLOW TEST:10.197 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":112,"skipped":1732,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:25:53.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-45b63220-88d8-4cf7-bc53-166a454fdca5 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-45b63220-88d8-4cf7-bc53-166a454fdca5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:26:00.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-829" for this suite. 
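For reference, this variant consumes the ConfigMap through a plain configMap volume rather than a projected one; a minimal sketch (the data key and value are illustrative, the ConfigMap name is from the run). As in the projected case, the kubelet periodically re-syncs the volume, so the updated value eventually appears in the mounted file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-45b63220-88d8-4cf7-bc53-166a454fdca5
data:
  data-1: "value-1"                  # hypothetical key; the spec updates the value and polls the file
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo          # hypothetical name
spec:
  containers:
    - name: configmap-volume-test
      image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
      args: ["pause"]
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-upd-45b63220-88d8-4cf7-bc53-166a454fdca5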
• [SLOW TEST:6.739 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":1743,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:26:00.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-5ed3ac16-57eb-4119-9ce3-12f666ca3a79 STEP: Creating a pod to test consume configMaps May 11 19:26:00.494: INFO: Waiting up to 5m0s for pod "pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333" in namespace "configmap-474" to be "Succeeded or Failed" May 11 19:26:00.601: INFO: Pod "pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333": Phase="Pending", Reason="", readiness=false. Elapsed: 106.651204ms May 11 19:26:03.074: INFO: Pod "pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333": Phase="Pending", Reason="", readiness=false. Elapsed: 2.579865172s May 11 19:26:05.077: INFO: Pod "pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5827532s May 11 19:26:07.080: INFO: Pod "pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333": Phase="Running", Reason="", readiness=true. Elapsed: 6.585422216s May 11 19:26:09.886: INFO: Pod "pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.391652622s STEP: Saw pod success May 11 19:26:09.886: INFO: Pod "pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333" satisfied condition "Succeeded or Failed" May 11 19:26:10.260: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333 container configmap-volume-test: STEP: delete the pod May 11 19:26:12.301: INFO: Waiting for pod pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333 to disappear May 11 19:26:12.317: INFO: Pod pod-configmaps-4582ebcc-853a-433a-af18-09508eb66333 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:26:12.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-474" for this suite. 
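The "with mappings" variant differs from the previous spec only in that the volume lists explicit items, remapping a key to a chosen relative path instead of using the key name as the file name. A sketch with an illustrative key and path (the ConfigMap name is from the run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapped        # hypothetical name
spec:
  containers:
    - name: configmap-volume-test
      image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
      args: ["pause"]
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-map-5ed3ac16-57eb-4119-9ce3-12f666ca3a79
        items:
          - key: data-2              # hypothetical key present in the ConfigMap
            path: path/to/data-2     # file then appears at /etc/configmap-volume/path/to/data-2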
• [SLOW TEST:12.327 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":1746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:26:12.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 11 19:26:13.118: INFO: >>> kubeConfig: /root/.kube/config May 11 19:26:15.066: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:26:29.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7745" for this suite. 
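This spec registers two CRDs that share a group and version and differ only in kind, then checks that both show up in the aggregated OpenAPI document. One of the pair would look roughly like the following (group, names, and schema are illustrative; the sibling CRD would differ only in its names and kind):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.crd-publish-openapi-test.example.com   # hypothetical
spec:
  group: crd-publish-openapi-test.example.com                # hypothetical group
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true         # accept arbitrary fields in this sketch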
• [SLOW TEST:16.643 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":115,"skipped":1779,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:26:29.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 11 19:26:29.206: INFO: namespace kubectl-7955 May 11 19:26:29.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7955' May 11 19:26:29.566: INFO: stderr: "" May 11 19:26:29.566: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 11 19:26:30.570: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:26:30.570: INFO: Found 0 / 1 May 11 19:26:31.569: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:26:31.569: INFO: Found 0 / 1 May 11 19:26:32.569: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:26:32.569: INFO: Found 0 / 1 May 11 19:26:33.691: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:26:33.691: INFO: Found 1 / 1 May 11 19:26:33.691: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 19:26:33.695: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:26:33.695: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 19:26:33.695: INFO: wait on agnhost-master startup in kubectl-7955 May 11 19:26:33.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-5hk6p agnhost-master --namespace=kubectl-7955' May 11 19:26:33.879: INFO: stderr: "" May 11 19:26:33.879: INFO: stdout: "Paused\n" STEP: exposing RC May 11 19:26:33.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7955' May 11 19:26:34.265: INFO: stderr: "" May 11 19:26:34.265: INFO: stdout: "service/rm2 exposed\n" May 11 19:26:34.410: INFO: Service rm2 in namespace kubectl-7955 found. 
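That expose call is shorthand for creating the Service by hand; the generated object is approximately the following (the selector is inferred from the RC's pod label, map[app:agnhost], matched earlier in this spec):

apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-7955
spec:
  type: ClusterIP
  selector:
    app: agnhost                     # copied from the replication controller's selector
  ports:
    - port: 1234                     # the --port value
      targetPort: 6379               # the --target-port value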
STEP: exposing service May 11 19:26:36.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7955' May 11 19:26:37.169: INFO: stderr: "" May 11 19:26:37.169: INFO: stdout: "service/rm3 exposed\n" May 11 19:26:37.345: INFO: Service rm3 in namespace kubectl-7955 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:26:39.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7955" for this suite. • [SLOW TEST:10.239 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":116,"skipped":1786,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:26:39.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 11 19:26:39.477: INFO: Waiting up to 5m0s for pod "var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d" in namespace "var-expansion-3209" to be "Succeeded or Failed" May 11 19:26:39.482: INFO: Pod "var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.185354ms May 11 19:26:42.302: INFO: Pod "var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.824727294s May 11 19:26:44.306: INFO: Pod "var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.828700155s May 11 19:26:46.457: INFO: Pod "var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.980546948s STEP: Saw pod success May 11 19:26:46.458: INFO: Pod "var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d" satisfied condition "Succeeded or Failed" May 11 19:26:46.461: INFO: Trying to get logs from node latest-worker2 pod var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d container dapi-container: STEP: delete the pod May 11 19:26:46.784: INFO: Waiting for pod var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d to disappear May 11 19:26:47.056: INFO: Pod var-expansion-bff405c7-a15d-4920-8f8e-5cbdd6b4890d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:26:47.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3209" for this suite. • [SLOW TEST:8.047 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":117,"skipped":1788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:26:47.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:26:48.871: INFO: Waiting up to 5m0s for pod "busybox-user-65534-833c3252-8325-4440-bdd4-d84a6dffd875" in namespace "security-context-test-5608" to be "Succeeded or Failed" May 11 19:26:48.947: INFO: Pod "busybox-user-65534-833c3252-8325-4440-bdd4-d84a6dffd875": Phase="Pending", Reason="", readiness=false. Elapsed: 75.949346ms May 11 19:26:50.950: INFO: Pod "busybox-user-65534-833c3252-8325-4440-bdd4-d84a6dffd875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079172117s May 11 19:26:52.953: INFO: Pod "busybox-user-65534-833c3252-8325-4440-bdd4-d84a6dffd875": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082273243s May 11 19:26:55.005: INFO: Pod "busybox-user-65534-833c3252-8325-4440-bdd4-d84a6dffd875": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.13404571s May 11 19:26:55.005: INFO: Pod "busybox-user-65534-833c3252-8325-4440-bdd4-d84a6dffd875" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:26:55.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5608" for this suite. • [SLOW TEST:7.672 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":118,"skipped":1826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:26:55.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 11 19:26:57.541: INFO: Waiting up to 1m0s for all nodes to be ready May 11 19:27:57.559: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:27:57.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
May 11 19:28:01.735: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:28:24.027: INFO: pods created so far: [1 1 1] May 11 19:28:24.027: INFO: length of pods created so far: 3 May 11 19:28:40.035: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:28:47.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6700" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:28:48.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3441" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:113.638 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":119,"skipped":1892,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:28:48.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 11 19:28:48.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-9974 -- logs-generator --log-lines-total 100 --run-duration 20s' May 11 19:28:49.107: INFO: stderr: "" May 11 19:28:49.108: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start.
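Stepping back to the preemption spec that just passed: preemption is driven by PriorityClass objects, which the ReplicaSets' pods reference via spec.priorityClassName. A sketch of a high-priority class (name, value, and description are illustrative, not taken from this run):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority                # hypothetical name
value: 1000000                       # pods of this class may preempt pods with lower values
globalDefault: false
description: "Illustrative high-priority class; lower-priority pods are evicted when nodes are full."

The "pods created so far: [1 1 1]" and "[2 2 1]" counters above track how many replicas of each priority tier the scheduler has managed to place as preemption proceeds.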
May 11 19:28:49.108: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 11 19:28:49.108: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9974" to be "running and ready, or succeeded" May 11 19:28:49.150: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 42.248632ms May 11 19:28:51.153: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045551214s May 11 19:28:53.296: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188633668s May 11 19:28:55.399: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.290877595s May 11 19:28:55.399: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 11 19:28:55.399: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings May 11 19:28:55.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9974' May 11 19:28:55.542: INFO: stderr: "" May 11 19:28:55.542: INFO: stdout: "I0511 19:28:52.208872 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/xwxm 560\nI0511 19:28:52.408989 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/4n82 315\nI0511 19:28:52.608979 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/8f5 345\nI0511 19:28:52.809093 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/thsd 382\nI0511 19:28:53.009028 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/77x 330\nI0511 19:28:53.209243 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/sg7 275\nI0511 19:28:53.409081 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/jc25 359\nI0511 19:28:53.609019 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/x2l 434\nI0511 19:28:53.809043 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/xv6r 409\nI0511 19:28:54.009036 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/j8nc 415\nI0511 19:28:54.209104 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/mnjk 457\nI0511 19:28:54.409033 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/rdb 592\nI0511 19:28:54.609063 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/2z78 549\nI0511 19:28:54.808987 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/bnrm 289\nI0511 19:28:55.009065 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/pzj 505\nI0511 19:28:55.209037 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/mzrx 566\nI0511 19:28:55.409016 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/m5m 200\n" STEP: limiting log lines May 11 19:28:55.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9974 --tail=1' May 11 19:28:55.683: INFO: stderr: "" May 11 19:28:55.683: INFO: stdout: "I0511 19:28:55.608987 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/kc7 299\n" May 11 19:28:55.683: INFO: got output "I0511 19:28:55.608987 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/kc7 299\n" STEP: limiting log bytes May 11 19:28:55.684: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator 
logs-generator --namespace=kubectl-9974 --limit-bytes=1' May 11 19:28:55.803: INFO: stderr: "" May 11 19:28:55.803: INFO: stdout: "I" May 11 19:28:55.803: INFO: got output "I" STEP: exposing timestamps May 11 19:28:55.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9974 --tail=1 --timestamps' May 11 19:28:56.202: INFO: stderr: "" May 11 19:28:56.202: INFO: stdout: "2020-05-11T19:28:56.009270743Z I0511 19:28:56.008998 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/74jl 308\n" May 11 19:28:56.202: INFO: got output "2020-05-11T19:28:56.009270743Z I0511 19:28:56.008998 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/74jl 308\n" STEP: restricting to a time range May 11 19:28:58.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9974 --since=1s' May 11 19:28:58.812: INFO: stderr: "" May 11 19:28:58.812: INFO: stdout: "I0511 19:28:57.809041 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/2s57 504\nI0511 19:28:58.009044 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/kube-system/pods/hkxj 538\nI0511 19:28:58.209007 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/default/pods/d2n 575\nI0511 19:28:58.408998 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/rxb 295\nI0511 19:28:58.608993 1 logs_generator.go:76] 32 GET /api/v1/namespaces/default/pods/j8r 392\n" May 11 19:28:58.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9974 --since=24h' May 11 19:28:59.025: INFO: stderr: "" May 11 19:28:59.025: INFO: stdout: "I0511 19:28:52.208872 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/xwxm 560\nI0511 19:28:52.408989 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/4n82 315\nI0511 19:28:52.608979 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/8f5 345\nI0511 19:28:52.809093 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/thsd 382\nI0511 19:28:53.009028 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/77x 330\nI0511 19:28:53.209243 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/sg7 275\nI0511 19:28:53.409081 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/jc25 359\nI0511 19:28:53.609019 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/x2l 434\nI0511 19:28:53.809043 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/xv6r 409\nI0511 19:28:54.009036 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/j8nc 415\nI0511 19:28:54.209104 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/mnjk 457\nI0511 19:28:54.409033 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/rdb 592\nI0511 19:28:54.609063 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/2z78 549\nI0511 19:28:54.808987 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/bnrm 289\nI0511 19:28:55.009065 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/pzj 505\nI0511 19:28:55.209037 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/mzrx 566\nI0511 19:28:55.409016 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/m5m 200\nI0511 19:28:55.608987 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/kc7 
299\nI0511 19:28:55.809037 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/gtkx 569\nI0511 19:28:56.008998 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/74jl 308\nI0511 19:28:56.209055 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/ztms 377\nI0511 19:28:56.409027 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/rpfj 324\nI0511 19:28:56.609022 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/9f8k 356\nI0511 19:28:56.809047 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/dhr 559\nI0511 19:28:57.009033 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/mk9x 273\nI0511 19:28:57.208977 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/8vm 573\nI0511 19:28:57.409022 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/tnpj 245\nI0511 19:28:57.609008 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/kc6q 252\nI0511 19:28:57.809041 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/2s57 504\nI0511 19:28:58.009044 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/kube-system/pods/hkxj 538\nI0511 19:28:58.209007 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/default/pods/d2n 575\nI0511 19:28:58.408998 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/rxb 295\nI0511 19:28:58.608993 1 logs_generator.go:76] 32 GET /api/v1/namespaces/default/pods/j8r 392\nI0511 19:28:58.809020 1 logs_generator.go:76] 33 GET /api/v1/namespaces/ns/pods/54gm 316\nI0511 19:28:59.008977 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/default/pods/xrpk 306\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 11 19:28:59.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9974' May 11 19:29:05.369: INFO: stderr: "" May 11 19:29:05.369: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:29:05.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9974" for this suite. 
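The filtering steps above exercise four standard kubectl logs flags; collected as a quick reference against the same pod and namespace:

  kubectl logs logs-generator -n kubectl-9974                        # full log so far
  kubectl logs logs-generator -n kubectl-9974 --tail=1               # last line only
  kubectl logs logs-generator -n kubectl-9974 --limit-bytes=1        # first byte only
  kubectl logs logs-generator -n kubectl-9974 --tail=1 --timestamps  # prepend an RFC3339 timestamp to each line
  kubectl logs logs-generator -n kubectl-9974 --since=1s             # only entries from the last second (--since=24h in the test's final check)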
• [SLOW TEST:16.660 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":120,"skipped":1917,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:29:05.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:29:06.090: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:29:08.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822146, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822146, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822146, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822146, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:29:10.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822146, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822146, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822146, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822146, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:29:13.782: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:29:14.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6118" for this suite. STEP: Destroying namespace "webhook-6118-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.487 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":121,"skipped":1939,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:29:14.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:29:28.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5194" for this suite. • [SLOW TEST:14.014 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":122,"skipped":1960,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:29:28.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:29:29.004: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 19:29:31.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9902 create -f -' May 11 19:29:38.055: INFO: stderr: "" May 11 19:29:38.055: INFO: stdout: "e2e-test-crd-publish-openapi-9056-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 11 19:29:38.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9902 delete e2e-test-crd-publish-openapi-9056-crds test-cr' May 11 19:29:38.168: INFO: stderr: "" May 11 19:29:38.168: INFO: stdout: "e2e-test-crd-publish-openapi-9056-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 11 19:29:38.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9902 apply -f -' May 11 19:29:38.432: INFO: stderr: "" May 11
19:29:38.432: INFO: stdout: "e2e-test-crd-publish-openapi-9056-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 11 19:29:38.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9902 delete e2e-test-crd-publish-openapi-9056-crds test-cr' May 11 19:29:38.539: INFO: stderr: "" May 11 19:29:38.539: INFO: stdout: "e2e-test-crd-publish-openapi-9056-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 11 19:29:38.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9056-crds' May 11 19:29:38.795: INFO: stderr: "" May 11 19:29:38.795: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9056-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:29:41.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9902" for this suite. • [SLOW TEST:12.859 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":123,"skipped":1971,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:29:41.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 11 19:29:42.343: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:29:42.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3290" for this suite. 
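With -p 0 the proxy binds an ephemeral local port and prints it on startup, which the test scrapes before curling /api/; --disable-filter switches off the proxy's request filtering and is only appropriate in a disposable test cluster. A manual sketch, where the port is whatever the proxy reports:

  # prints e.g. "Starting to serve on 127.0.0.1:<port>"
  kubectl proxy -p 0 --disable-filter &
  # query the apiserver through the proxy, substituting the reported port
  curl http://127.0.0.1:<port>/api/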
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":124,"skipped":1985,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:29:42.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:29:44.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5278" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":125,"skipped":2006,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:29:45.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5916 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-5916 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5916 May 11 19:29:47.896: INFO: Found 0 stateful pods, waiting for 1 May 11 19:29:58.514: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 11 19:29:58.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 19:29:59.843: INFO: stderr: "I0511 19:29:59.124975 1438 log.go:172] (0xc000b5d3f0) (0xc000b268c0) Create stream\nI0511 19:29:59.125042 1438 log.go:172] (0xc000b5d3f0) (0xc000b268c0) Stream added, broadcasting: 1\nI0511 19:29:59.127819 1438 log.go:172] (0xc000b5d3f0) Reply frame received for 1\nI0511 19:29:59.127855 1438 log.go:172] (0xc000b5d3f0) (0xc00096a500) Create stream\nI0511 19:29:59.127868 1438 log.go:172] (0xc000b5d3f0) (0xc00096a500) Stream added, broadcasting: 3\nI0511 19:29:59.129055 1438 log.go:172] (0xc000b5d3f0) Reply frame received for 3\nI0511 19:29:59.129093 1438 log.go:172] (0xc000b5d3f0) (0xc000b8e460) Create stream\nI0511 19:29:59.129108 1438 log.go:172] (0xc000b5d3f0) (0xc000b8e460) Stream added, broadcasting: 5\nI0511 19:29:59.130258 1438 log.go:172] (0xc000b5d3f0) Reply frame received for 5\nI0511 19:29:59.196706 1438 log.go:172] (0xc000b5d3f0) Data frame received for 5\nI0511 19:29:59.196734 1438 log.go:172] (0xc000b8e460) (5) Data frame handling\nI0511 19:29:59.196748 1438 log.go:172] (0xc000b8e460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 19:29:59.836000 1438 log.go:172] (0xc000b5d3f0) Data frame received for 3\nI0511 19:29:59.836031 1438 log.go:172] (0xc00096a500) (3) Data frame handling\nI0511 19:29:59.836051 1438 log.go:172] (0xc00096a500) (3) Data frame sent\nI0511 19:29:59.836669 1438 log.go:172] (0xc000b5d3f0) Data frame 
received for 3\nI0511 19:29:59.836754 1438 log.go:172] (0xc00096a500) (3) Data frame handling\nI0511 19:29:59.836832 1438 log.go:172] (0xc000b5d3f0) Data frame received for 5\nI0511 19:29:59.836848 1438 log.go:172] (0xc000b8e460) (5) Data frame handling\nI0511 19:29:59.839388 1438 log.go:172] (0xc000b5d3f0) Data frame received for 1\nI0511 19:29:59.839419 1438 log.go:172] (0xc000b268c0) (1) Data frame handling\nI0511 19:29:59.839438 1438 log.go:172] (0xc000b268c0) (1) Data frame sent\nI0511 19:29:59.839452 1438 log.go:172] (0xc000b5d3f0) (0xc000b268c0) Stream removed, broadcasting: 1\nI0511 19:29:59.839628 1438 log.go:172] (0xc000b5d3f0) Go away received\nI0511 19:29:59.839737 1438 log.go:172] (0xc000b5d3f0) (0xc000b268c0) Stream removed, broadcasting: 1\nI0511 19:29:59.839752 1438 log.go:172] (0xc000b5d3f0) (0xc00096a500) Stream removed, broadcasting: 3\nI0511 19:29:59.839762 1438 log.go:172] (0xc000b5d3f0) (0xc000b8e460) Stream removed, broadcasting: 5\n" May 11 19:29:59.843: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 19:29:59.843: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 19:29:59.892: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 19:30:10.129: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 19:30:10.129: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:30:11.011: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:11.011: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:47 +0000 UTC }] May 11 19:30:11.011: INFO: May 11 19:30:11.011: INFO: StatefulSet ss has not reached scale 3, at 1 May 11 19:30:12.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.381344657s May 11 19:30:13.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.011109632s May 11 19:30:14.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.820027995s May 11 19:30:16.442: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.530668136s May 11 19:30:17.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950216437s May 11 19:30:18.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.578956121s May 11 19:30:19.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 507.281212ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5916 May 11 19:30:20.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:30:21.137: INFO: stderr: "I0511 19:30:21.040099 1458 log.go:172] (0xc000a83600) (0xc000a48500) Create stream\nI0511 19:30:21.040142 1458 log.go:172] (0xc000a83600) (0xc000a48500) Stream added, broadcasting: 1\nI0511 19:30:21.043561 1458 log.go:172] (0xc000a83600) Reply frame received for 1\nI0511 
19:30:21.043608 1458 log.go:172] (0xc000a83600) (0xc0006e2f00) Create stream\nI0511 19:30:21.043628 1458 log.go:172] (0xc000a83600) (0xc0006e2f00) Stream added, broadcasting: 3\nI0511 19:30:21.044365 1458 log.go:172] (0xc000a83600) Reply frame received for 3\nI0511 19:30:21.044398 1458 log.go:172] (0xc000a83600) (0xc0006225a0) Create stream\nI0511 19:30:21.044408 1458 log.go:172] (0xc000a83600) (0xc0006225a0) Stream added, broadcasting: 5\nI0511 19:30:21.045034 1458 log.go:172] (0xc000a83600) Reply frame received for 5\nI0511 19:30:21.130313 1458 log.go:172] (0xc000a83600) Data frame received for 3\nI0511 19:30:21.130341 1458 log.go:172] (0xc0006e2f00) (3) Data frame handling\nI0511 19:30:21.130360 1458 log.go:172] (0xc0006e2f00) (3) Data frame sent\nI0511 19:30:21.130371 1458 log.go:172] (0xc000a83600) Data frame received for 3\nI0511 19:30:21.130381 1458 log.go:172] (0xc0006e2f00) (3) Data frame handling\nI0511 19:30:21.130608 1458 log.go:172] (0xc000a83600) Data frame received for 5\nI0511 19:30:21.130653 1458 log.go:172] (0xc0006225a0) (5) Data frame handling\nI0511 19:30:21.130681 1458 log.go:172] (0xc0006225a0) (5) Data frame sent\nI0511 19:30:21.130701 1458 log.go:172] (0xc000a83600) Data frame received for 5\nI0511 19:30:21.130729 1458 log.go:172] (0xc0006225a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 19:30:21.132054 1458 log.go:172] (0xc000a83600) Data frame received for 1\nI0511 19:30:21.132068 1458 log.go:172] (0xc000a48500) (1) Data frame handling\nI0511 19:30:21.132077 1458 log.go:172] (0xc000a48500) (1) Data frame sent\nI0511 19:30:21.132086 1458 log.go:172] (0xc000a83600) (0xc000a48500) Stream removed, broadcasting: 1\nI0511 19:30:21.132100 1458 log.go:172] (0xc000a83600) Go away received\nI0511 19:30:21.132525 1458 log.go:172] (0xc000a83600) (0xc000a48500) Stream removed, broadcasting: 1\nI0511 19:30:21.132546 1458 log.go:172] (0xc000a83600) (0xc0006e2f00) Stream removed, broadcasting: 3\nI0511 19:30:21.132557 1458 log.go:172] (0xc000a83600) (0xc0006225a0) Stream removed, broadcasting: 5\n" May 11 19:30:21.137: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 19:30:21.137: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 19:30:21.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:30:21.360: INFO: stderr: "I0511 19:30:21.291527 1478 log.go:172] (0xc0009bd970) (0xc000b22500) Create stream\nI0511 19:30:21.291568 1478 log.go:172] (0xc0009bd970) (0xc000b22500) Stream added, broadcasting: 1\nI0511 19:30:21.296104 1478 log.go:172] (0xc0009bd970) Reply frame received for 1\nI0511 19:30:21.296170 1478 log.go:172] (0xc0009bd970) (0xc0007ecf00) Create stream\nI0511 19:30:21.296183 1478 log.go:172] (0xc0009bd970) (0xc0007ecf00) Stream added, broadcasting: 3\nI0511 19:30:21.297635 1478 log.go:172] (0xc0009bd970) Reply frame received for 3\nI0511 19:30:21.297706 1478 log.go:172] (0xc0009bd970) (0xc0006e25a0) Create stream\nI0511 19:30:21.297737 1478 log.go:172] (0xc0009bd970) (0xc0006e25a0) Stream added, broadcasting: 5\nI0511 19:30:21.298815 1478 log.go:172] (0xc0009bd970) Reply frame received for 5\nI0511 19:30:21.354048 1478 log.go:172] (0xc0009bd970) Data frame received for 3\nI0511 19:30:21.354189 1478 log.go:172] 
(0xc0007ecf00) (3) Data frame handling\nI0511 19:30:21.354278 1478 log.go:172] (0xc0007ecf00) (3) Data frame sent\nI0511 19:30:21.354321 1478 log.go:172] (0xc0009bd970) Data frame received for 5\nI0511 19:30:21.354360 1478 log.go:172] (0xc0006e25a0) (5) Data frame handling\nI0511 19:30:21.354391 1478 log.go:172] (0xc0006e25a0) (5) Data frame sent\nI0511 19:30:21.354415 1478 log.go:172] (0xc0009bd970) Data frame received for 5\nI0511 19:30:21.354426 1478 log.go:172] (0xc0006e25a0) (5) Data frame handling\nI0511 19:30:21.354442 1478 log.go:172] (0xc0009bd970) Data frame received for 3\nI0511 19:30:21.354463 1478 log.go:172] (0xc0007ecf00) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 19:30:21.355683 1478 log.go:172] (0xc0009bd970) Data frame received for 1\nI0511 19:30:21.355704 1478 log.go:172] (0xc000b22500) (1) Data frame handling\nI0511 19:30:21.355720 1478 log.go:172] (0xc000b22500) (1) Data frame sent\nI0511 19:30:21.355882 1478 log.go:172] (0xc0009bd970) (0xc000b22500) Stream removed, broadcasting: 1\nI0511 19:30:21.355936 1478 log.go:172] (0xc0009bd970) Go away received\nI0511 19:30:21.356284 1478 log.go:172] (0xc0009bd970) (0xc000b22500) Stream removed, broadcasting: 1\nI0511 19:30:21.356311 1478 log.go:172] (0xc0009bd970) (0xc0007ecf00) Stream removed, broadcasting: 3\nI0511 19:30:21.356331 1478 log.go:172] (0xc0009bd970) (0xc0006e25a0) Stream removed, broadcasting: 5\n" May 11 19:30:21.360: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 19:30:21.360: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 19:30:21.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:30:21.584: INFO: stderr: "I0511 19:30:21.516499 1500 log.go:172] (0xc000980c60) (0xc000915180) Create stream\nI0511 19:30:21.516535 1500 log.go:172] (0xc000980c60) (0xc000915180) Stream added, broadcasting: 1\nI0511 19:30:21.517884 1500 log.go:172] (0xc000980c60) Reply frame received for 1\nI0511 19:30:21.517913 1500 log.go:172] (0xc000980c60) (0xc000907180) Create stream\nI0511 19:30:21.517923 1500 log.go:172] (0xc000980c60) (0xc000907180) Stream added, broadcasting: 3\nI0511 19:30:21.518580 1500 log.go:172] (0xc000980c60) Reply frame received for 3\nI0511 19:30:21.518602 1500 log.go:172] (0xc000980c60) (0xc000900960) Create stream\nI0511 19:30:21.518609 1500 log.go:172] (0xc000980c60) (0xc000900960) Stream added, broadcasting: 5\nI0511 19:30:21.519194 1500 log.go:172] (0xc000980c60) Reply frame received for 5\nI0511 19:30:21.578317 1500 log.go:172] (0xc000980c60) Data frame received for 3\nI0511 19:30:21.578343 1500 log.go:172] (0xc000907180) (3) Data frame handling\nI0511 19:30:21.578378 1500 log.go:172] (0xc000907180) (3) Data frame sent\nI0511 19:30:21.578391 1500 log.go:172] (0xc000980c60) Data frame received for 3\nI0511 19:30:21.578399 1500 log.go:172] (0xc000907180) (3) Data frame handling\nI0511 19:30:21.578408 1500 log.go:172] (0xc000980c60) Data frame received for 5\nI0511 19:30:21.578415 1500 log.go:172] (0xc000900960) (5) Data frame handling\nI0511 19:30:21.578422 1500 log.go:172] (0xc000900960) (5) Data frame sent\nI0511 19:30:21.578431 1500 log.go:172] (0xc000980c60) Data frame received for 
5\nI0511 19:30:21.578449 1500 log.go:172] (0xc000900960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 19:30:21.579949 1500 log.go:172] (0xc000980c60) Data frame received for 1\nI0511 19:30:21.579962 1500 log.go:172] (0xc000915180) (1) Data frame handling\nI0511 19:30:21.579968 1500 log.go:172] (0xc000915180) (1) Data frame sent\nI0511 19:30:21.579975 1500 log.go:172] (0xc000980c60) (0xc000915180) Stream removed, broadcasting: 1\nI0511 19:30:21.580085 1500 log.go:172] (0xc000980c60) Go away received\nI0511 19:30:21.580275 1500 log.go:172] (0xc000980c60) (0xc000915180) Stream removed, broadcasting: 1\nI0511 19:30:21.580292 1500 log.go:172] (0xc000980c60) (0xc000907180) Stream removed, broadcasting: 3\nI0511 19:30:21.580301 1500 log.go:172] (0xc000980c60) (0xc000900960) Stream removed, broadcasting: 5\n" May 11 19:30:21.584: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 19:30:21.584: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 19:30:21.587: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 19:30:21.587: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 19:30:21.587: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 11 19:30:21.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 19:30:21.792: INFO: stderr: "I0511 19:30:21.712488 1519 log.go:172] (0xc00003a4d0) (0xc00049a1e0) Create stream\nI0511 19:30:21.712524 1519 log.go:172] (0xc00003a4d0) (0xc00049a1e0) Stream added, broadcasting: 1\nI0511 19:30:21.719897 1519 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0511 19:30:21.719951 1519 log.go:172] (0xc00003a4d0) (0xc000422dc0) Create stream\nI0511 19:30:21.719967 1519 log.go:172] (0xc00003a4d0) (0xc000422dc0) Stream added, broadcasting: 3\nI0511 19:30:21.721260 1519 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0511 19:30:21.721298 1519 log.go:172] (0xc00003a4d0) (0xc00049a960) Create stream\nI0511 19:30:21.721314 1519 log.go:172] (0xc00003a4d0) (0xc00049a960) Stream added, broadcasting: 5\nI0511 19:30:21.722136 1519 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0511 19:30:21.786548 1519 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0511 19:30:21.786590 1519 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0511 19:30:21.786623 1519 log.go:172] (0xc000422dc0) (3) Data frame handling\nI0511 19:30:21.786645 1519 log.go:172] (0xc000422dc0) (3) Data frame sent\nI0511 19:30:21.786654 1519 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0511 19:30:21.786671 1519 log.go:172] (0xc000422dc0) (3) Data frame handling\nI0511 19:30:21.786700 1519 log.go:172] (0xc00049a960) (5) Data frame handling\nI0511 19:30:21.786712 1519 log.go:172] (0xc00049a960) (5) Data frame sent\nI0511 19:30:21.786726 1519 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0511 19:30:21.786737 1519 log.go:172] (0xc00049a960) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 19:30:21.787746 1519 log.go:172] (0xc00003a4d0) Data frame 
received for 1\nI0511 19:30:21.787757 1519 log.go:172] (0xc00049a1e0) (1) Data frame handling\nI0511 19:30:21.787764 1519 log.go:172] (0xc00049a1e0) (1) Data frame sent\nI0511 19:30:21.787773 1519 log.go:172] (0xc00003a4d0) (0xc00049a1e0) Stream removed, broadcasting: 1\nI0511 19:30:21.787787 1519 log.go:172] (0xc00003a4d0) Go away received\nI0511 19:30:21.788056 1519 log.go:172] (0xc00003a4d0) (0xc00049a1e0) Stream removed, broadcasting: 1\nI0511 19:30:21.788072 1519 log.go:172] (0xc00003a4d0) (0xc000422dc0) Stream removed, broadcasting: 3\nI0511 19:30:21.788094 1519 log.go:172] (0xc00003a4d0) (0xc00049a960) Stream removed, broadcasting: 5\n" May 11 19:30:21.792: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 19:30:21.792: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 19:30:21.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 19:30:22.101: INFO: stderr: "I0511 19:30:21.910741 1539 log.go:172] (0xc000af0000) (0xc00016c640) Create stream\nI0511 19:30:21.910784 1539 log.go:172] (0xc000af0000) (0xc00016c640) Stream added, broadcasting: 1\nI0511 19:30:21.912842 1539 log.go:172] (0xc000af0000) Reply frame received for 1\nI0511 19:30:21.912884 1539 log.go:172] (0xc000af0000) (0xc0001394a0) Create stream\nI0511 19:30:21.912903 1539 log.go:172] (0xc000af0000) (0xc0001394a0) Stream added, broadcasting: 3\nI0511 19:30:21.913967 1539 log.go:172] (0xc000af0000) Reply frame received for 3\nI0511 19:30:21.914017 1539 log.go:172] (0xc000af0000) (0xc000239540) Create stream\nI0511 19:30:21.914038 1539 log.go:172] (0xc000af0000) (0xc000239540) Stream added, broadcasting: 5\nI0511 19:30:21.914718 1539 log.go:172] (0xc000af0000) Reply frame received for 5\nI0511 19:30:22.062922 1539 log.go:172] (0xc000af0000) Data frame received for 5\nI0511 19:30:22.062967 1539 log.go:172] (0xc000239540) (5) Data frame handling\nI0511 19:30:22.062995 1539 log.go:172] (0xc000239540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 19:30:22.094334 1539 log.go:172] (0xc000af0000) Data frame received for 3\nI0511 19:30:22.094374 1539 log.go:172] (0xc0001394a0) (3) Data frame handling\nI0511 19:30:22.094414 1539 log.go:172] (0xc0001394a0) (3) Data frame sent\nI0511 19:30:22.094439 1539 log.go:172] (0xc000af0000) Data frame received for 3\nI0511 19:30:22.094460 1539 log.go:172] (0xc0001394a0) (3) Data frame handling\nI0511 19:30:22.094640 1539 log.go:172] (0xc000af0000) Data frame received for 5\nI0511 19:30:22.094661 1539 log.go:172] (0xc000239540) (5) Data frame handling\nI0511 19:30:22.095779 1539 log.go:172] (0xc000af0000) Data frame received for 1\nI0511 19:30:22.095823 1539 log.go:172] (0xc00016c640) (1) Data frame handling\nI0511 19:30:22.095878 1539 log.go:172] (0xc00016c640) (1) Data frame sent\nI0511 19:30:22.095915 1539 log.go:172] (0xc000af0000) (0xc00016c640) Stream removed, broadcasting: 1\nI0511 19:30:22.095948 1539 log.go:172] (0xc000af0000) Go away received\nI0511 19:30:22.096415 1539 log.go:172] (0xc000af0000) (0xc00016c640) Stream removed, broadcasting: 1\nI0511 19:30:22.096437 1539 log.go:172] (0xc000af0000) (0xc0001394a0) Stream removed, broadcasting: 3\nI0511 19:30:22.096454 1539 log.go:172] (0xc000af0000) (0xc000239540) Stream removed, broadcasting: 5\n" May 11 
19:30:22.101: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 19:30:22.101: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 19:30:22.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 19:30:22.780: INFO: stderr: "I0511 19:30:22.278215 1558 log.go:172] (0xc00003a160) (0xc00051d720) Create stream\nI0511 19:30:22.278265 1558 log.go:172] (0xc00003a160) (0xc00051d720) Stream added, broadcasting: 1\nI0511 19:30:22.279983 1558 log.go:172] (0xc00003a160) Reply frame received for 1\nI0511 19:30:22.280020 1558 log.go:172] (0xc00003a160) (0xc00035d680) Create stream\nI0511 19:30:22.280040 1558 log.go:172] (0xc00003a160) (0xc00035d680) Stream added, broadcasting: 3\nI0511 19:30:22.280775 1558 log.go:172] (0xc00003a160) Reply frame received for 3\nI0511 19:30:22.280803 1558 log.go:172] (0xc00003a160) (0xc0000f30e0) Create stream\nI0511 19:30:22.280814 1558 log.go:172] (0xc00003a160) (0xc0000f30e0) Stream added, broadcasting: 5\nI0511 19:30:22.281606 1558 log.go:172] (0xc00003a160) Reply frame received for 5\nI0511 19:30:22.332423 1558 log.go:172] (0xc00003a160) Data frame received for 5\nI0511 19:30:22.332451 1558 log.go:172] (0xc0000f30e0) (5) Data frame handling\nI0511 19:30:22.332471 1558 log.go:172] (0xc0000f30e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 19:30:22.774413 1558 log.go:172] (0xc00003a160) Data frame received for 3\nI0511 19:30:22.774460 1558 log.go:172] (0xc00035d680) (3) Data frame handling\nI0511 19:30:22.774487 1558 log.go:172] (0xc00035d680) (3) Data frame sent\nI0511 19:30:22.774510 1558 log.go:172] (0xc00003a160) Data frame received for 3\nI0511 19:30:22.774529 1558 log.go:172] (0xc00035d680) (3) Data frame handling\nI0511 19:30:22.774565 1558 log.go:172] (0xc00003a160) Data frame received for 5\nI0511 19:30:22.774608 1558 log.go:172] (0xc0000f30e0) (5) Data frame handling\nI0511 19:30:22.775780 1558 log.go:172] (0xc00003a160) Data frame received for 1\nI0511 19:30:22.775826 1558 log.go:172] (0xc00051d720) (1) Data frame handling\nI0511 19:30:22.775880 1558 log.go:172] (0xc00051d720) (1) Data frame sent\nI0511 19:30:22.776068 1558 log.go:172] (0xc00003a160) (0xc00051d720) Stream removed, broadcasting: 1\nI0511 19:30:22.776098 1558 log.go:172] (0xc00003a160) Go away received\nI0511 19:30:22.776386 1558 log.go:172] (0xc00003a160) (0xc00051d720) Stream removed, broadcasting: 1\nI0511 19:30:22.776401 1558 log.go:172] (0xc00003a160) (0xc00035d680) Stream removed, broadcasting: 3\nI0511 19:30:22.776410 1558 log.go:172] (0xc00003a160) (0xc0000f30e0) Stream removed, broadcasting: 5\n" May 11 19:30:22.780: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 19:30:22.780: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 19:30:22.780: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:30:22.863: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 11 19:30:32.872: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 19:30:32.872: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false 
May 11 19:30:32.872: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 19:30:32.902: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:32.902: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:47 +0000 UTC }] May 11 19:30:32.902: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:32.903: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:32.903: INFO: May 11 19:30:32.903: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:30:34.184: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:34.184: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:47 +0000 UTC }] May 11 19:30:34.184: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:34.184: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:34.184: INFO: May 11 19:30:34.184: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:30:35.328: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:35.328: INFO: ss-0 
latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:47 +0000 UTC }] May 11 19:30:35.328: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:35.328: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:35.328: INFO: May 11 19:30:35.328: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:30:36.333: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:36.333: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:47 +0000 UTC }] May 11 19:30:36.333: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:36.333: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:36.333: INFO: May 11 19:30:36.333: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:30:37.336: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:37.336: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:29:47 +0000 UTC }] May 11 19:30:37.336: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:37.336: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:37.336: INFO: May 11 19:30:37.336: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 19:30:38.340: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:38.340: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:38.340: INFO: May 11 19:30:38.340: INFO: StatefulSet ss has not reached scale 0, at 1 May 11 19:30:39.412: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:39.412: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:39.412: INFO: May 11 19:30:39.412: INFO: StatefulSet ss has not reached scale 0, at 1 May 11 19:30:40.419: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:40.419: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:40.419: INFO: May 11 19:30:40.419: INFO: StatefulSet ss has not reached scale 0, at 1 May 11 19:30:41.765: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:41.765: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 
19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:41.765: INFO: May 11 19:30:41.765: INFO: StatefulSet ss has not reached scale 0, at 1 May 11 19:30:42.769: INFO: POD NODE PHASE GRACE CONDITIONS May 11 19:30:42.769: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 19:30:11 +0000 UTC }] May 11 19:30:42.769: INFO: May 11 19:30:42.769: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5916 May 11 19:30:43.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:30:43.913: INFO: rc: 1 May 11 19:30:43.913: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 11 19:30:53.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:30:54.020: INFO: rc: 1 May 11 19:30:54.020: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:31:04.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:31:04.122: INFO: rc: 1 May 11 19:31:04.122: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:31:14.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:31:14.364: INFO: rc: 1 May 11 19:31:14.364:
INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:31:24.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:31:24.472: INFO: rc: 1 May 11 19:31:24.472: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:31:34.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:31:34.638: INFO: rc: 1 May 11 19:31:34.638: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:31:44.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:31:44.756: INFO: rc: 1 May 11 19:31:44.756: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:31:54.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:31:54.851: INFO: rc: 1 May 11 19:31:54.851: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:32:04.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:32:05.025: INFO: rc: 1 May 11 19:32:05.025: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: 
Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:32:15.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:32:15.160: INFO: rc: 1 May 11 19:32:15.160: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:32:25.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:32:25.584: INFO: rc: 1 May 11 19:32:25.584: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:32:35.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:32:35.680: INFO: rc: 1 May 11 19:32:35.680: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:32:45.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:32:46.459: INFO: rc: 1 May 11 19:32:46.459: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:32:56.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:32:56.668: INFO: rc: 1 May 11 19:32:56.668: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:33:06.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:33:07.367: INFO: rc: 1 May 11 19:33:07.367: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:33:17.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:33:19.015: INFO: rc: 1 May 11 19:33:19.015: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:33:29.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:33:29.652: INFO: rc: 1 May 11 19:33:29.652: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:33:39.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:33:40.026: INFO: rc: 1 May 11 19:33:40.026: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:33:50.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:33:50.155: INFO: rc: 1 May 11 19:33:50.155: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:34:00.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:34:00.347: INFO: rc: 1 May 11 19:34:00.347: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:34:10.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:34:10.988: INFO: rc: 1 May 11 19:34:10.988: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:34:20.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:34:21.411: INFO: rc: 1 May 11 19:34:21.412: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:34:31.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:34:31.513: INFO: rc: 1 May 11 19:34:31.513: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:34:41.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:34:41.607: INFO: rc: 1 May 11 19:34:41.607: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:34:51.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:34:52.858: INFO: rc: 1 May 11 19:34:52.858: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:35:02.858: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:35:03.449: INFO: rc: 1 May 11 19:35:03.449: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:35:13.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:35:13.577: INFO: rc: 1 May 11 19:35:13.577: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:35:23.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:35:23.686: INFO: rc: 1 May 11 19:35:23.686: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:35:33.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:35:33.974: INFO: rc: 1 May 11 19:35:33.974: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 11 19:35:43.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5916 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:35:44.085: INFO: rc: 1 May 11 19:35:44.085: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: May 11 19:35:44.085: INFO: Scaling statefulset ss to 0 May 11 19:35:44.091: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 19:35:44.093: INFO: Deleting all statefulset in ns statefulset-5916 May 11 19:35:44.095: INFO: Scaling statefulset ss to 0 May 11 19:35:44.101: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:35:44.103: INFO: Deleting statefulset ss [AfterEach] 
[sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:35:44.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5916" for this suite. • [SLOW TEST:359.045 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":126,"skipped":2028,"failed":0} S ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:35:44.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8721 STEP: creating service affinity-clusterip in namespace services-8721 STEP: creating replication controller affinity-clusterip in namespace services-8721 I0511 19:35:44.306137 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8721, replica count: 3 I0511 19:35:47.356477 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:35:50.356757 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:35:53.356989 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 19:35:54.097: INFO: Creating new exec pod May 11 19:36:01.632: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8721 execpod-affinitycmhhd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 11 19:36:02.496: INFO: stderr: "I0511 19:36:02.436649 2165 log.go:172] (0xc0009d5130) (0xc000aac280) Create stream\nI0511 19:36:02.436710 2165 log.go:172] (0xc0009d5130) (0xc000aac280) Stream added, broadcasting: 1\nI0511 19:36:02.440672 2165 log.go:172] (0xc0009d5130) Reply frame received for 1\nI0511 19:36:02.440725 2165 log.go:172] (0xc0009d5130) (0xc000662dc0) Create stream\nI0511 19:36:02.440740 2165 log.go:172] 
(0xc0009d5130) (0xc000662dc0) Stream added, broadcasting: 3\nI0511 19:36:02.442233 2165 log.go:172] (0xc0009d5130) Reply frame received for 3\nI0511 19:36:02.442275 2165 log.go:172] (0xc0009d5130) (0xc0004b4e60) Create stream\nI0511 19:36:02.442289 2165 log.go:172] (0xc0009d5130) (0xc0004b4e60) Stream added, broadcasting: 5\nI0511 19:36:02.444877 2165 log.go:172] (0xc0009d5130) Reply frame received for 5\nI0511 19:36:02.490935 2165 log.go:172] (0xc0009d5130) Data frame received for 5\nI0511 19:36:02.490959 2165 log.go:172] (0xc0004b4e60) (5) Data frame handling\nI0511 19:36:02.490979 2165 log.go:172] (0xc0004b4e60) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0511 19:36:02.491199 2165 log.go:172] (0xc0009d5130) Data frame received for 5\nI0511 19:36:02.491214 2165 log.go:172] (0xc0004b4e60) (5) Data frame handling\nI0511 19:36:02.491223 2165 log.go:172] (0xc0004b4e60) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0511 19:36:02.491498 2165 log.go:172] (0xc0009d5130) Data frame received for 3\nI0511 19:36:02.491512 2165 log.go:172] (0xc000662dc0) (3) Data frame handling\nI0511 19:36:02.491532 2165 log.go:172] (0xc0009d5130) Data frame received for 5\nI0511 19:36:02.491549 2165 log.go:172] (0xc0004b4e60) (5) Data frame handling\nI0511 19:36:02.492640 2165 log.go:172] (0xc0009d5130) Data frame received for 1\nI0511 19:36:02.492657 2165 log.go:172] (0xc000aac280) (1) Data frame handling\nI0511 19:36:02.492667 2165 log.go:172] (0xc000aac280) (1) Data frame sent\nI0511 19:36:02.492683 2165 log.go:172] (0xc0009d5130) (0xc000aac280) Stream removed, broadcasting: 1\nI0511 19:36:02.492699 2165 log.go:172] (0xc0009d5130) Go away received\nI0511 19:36:02.492959 2165 log.go:172] (0xc0009d5130) (0xc000aac280) Stream removed, broadcasting: 1\nI0511 19:36:02.492970 2165 log.go:172] (0xc0009d5130) (0xc000662dc0) Stream removed, broadcasting: 3\nI0511 19:36:02.492976 2165 log.go:172] (0xc0009d5130) (0xc0004b4e60) Stream removed, broadcasting: 5\n" May 11 19:36:02.497: INFO: stdout: "" May 11 19:36:02.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8721 execpod-affinitycmhhd -- /bin/sh -x -c nc -zv -t -w 2 10.98.219.156 80' May 11 19:36:02.688: INFO: stderr: "I0511 19:36:02.619247 2184 log.go:172] (0xc0000cc370) (0xc000996f00) Create stream\nI0511 19:36:02.619291 2184 log.go:172] (0xc0000cc370) (0xc000996f00) Stream added, broadcasting: 1\nI0511 19:36:02.620495 2184 log.go:172] (0xc0000cc370) Reply frame received for 1\nI0511 19:36:02.620521 2184 log.go:172] (0xc0000cc370) (0xc0009a7720) Create stream\nI0511 19:36:02.620528 2184 log.go:172] (0xc0000cc370) (0xc0009a7720) Stream added, broadcasting: 3\nI0511 19:36:02.621677 2184 log.go:172] (0xc0000cc370) Reply frame received for 3\nI0511 19:36:02.621724 2184 log.go:172] (0xc0000cc370) (0xc0009a7e00) Create stream\nI0511 19:36:02.621737 2184 log.go:172] (0xc0000cc370) (0xc0009a7e00) Stream added, broadcasting: 5\nI0511 19:36:02.622388 2184 log.go:172] (0xc0000cc370) Reply frame received for 5\nI0511 19:36:02.682923 2184 log.go:172] (0xc0000cc370) Data frame received for 3\nI0511 19:36:02.682954 2184 log.go:172] (0xc0009a7720) (3) Data frame handling\nI0511 19:36:02.682977 2184 log.go:172] (0xc0000cc370) Data frame received for 5\nI0511 19:36:02.682988 2184 log.go:172] (0xc0009a7e00) (5) Data frame handling\nI0511 19:36:02.683001 2184 log.go:172] (0xc0009a7e00) (5) Data frame sent\nI0511 19:36:02.683014 2184 
log.go:172] (0xc0000cc370) Data frame received for 5\n+ nc -zv -t -w 2 10.98.219.156 80\nConnection to 10.98.219.156 80 port [tcp/http] succeeded!\nI0511 19:36:02.683025 2184 log.go:172] (0xc0009a7e00) (5) Data frame handling\nI0511 19:36:02.684298 2184 log.go:172] (0xc0000cc370) Data frame received for 1\nI0511 19:36:02.684321 2184 log.go:172] (0xc000996f00) (1) Data frame handling\nI0511 19:36:02.684337 2184 log.go:172] (0xc000996f00) (1) Data frame sent\nI0511 19:36:02.684359 2184 log.go:172] (0xc0000cc370) (0xc000996f00) Stream removed, broadcasting: 1\nI0511 19:36:02.684712 2184 log.go:172] (0xc0000cc370) (0xc000996f00) Stream removed, broadcasting: 1\nI0511 19:36:02.684732 2184 log.go:172] (0xc0000cc370) (0xc0009a7720) Stream removed, broadcasting: 3\nI0511 19:36:02.684852 2184 log.go:172] (0xc0000cc370) (0xc0009a7e00) Stream removed, broadcasting: 5\n" May 11 19:36:02.689: INFO: stdout: "" May 11 19:36:02.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8721 execpod-affinitycmhhd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.219.156:80/ ; done' May 11 19:36:02.963: INFO: stderr: "I0511 19:36:02.809104 2204 log.go:172] (0xc000acfa20) (0xc000758f00) Create stream\nI0511 19:36:02.809282 2204 log.go:172] (0xc000acfa20) (0xc000758f00) Stream added, broadcasting: 1\nI0511 19:36:02.811946 2204 log.go:172] (0xc000acfa20) Reply frame received for 1\nI0511 19:36:02.812014 2204 log.go:172] (0xc000acfa20) (0xc0007994a0) Create stream\nI0511 19:36:02.812041 2204 log.go:172] (0xc000acfa20) (0xc0007994a0) Stream added, broadcasting: 3\nI0511 19:36:02.813673 2204 log.go:172] (0xc000acfa20) Reply frame received for 3\nI0511 19:36:02.813717 2204 log.go:172] (0xc000acfa20) (0xc000456aa0) Create stream\nI0511 19:36:02.813733 2204 log.go:172] (0xc000acfa20) (0xc000456aa0) Stream added, broadcasting: 5\nI0511 19:36:02.814563 2204 log.go:172] (0xc000acfa20) Reply frame received for 5\nI0511 19:36:02.873971 2204 log.go:172] (0xc000acfa20) Data frame received for 3\nI0511 19:36:02.874000 2204 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0511 19:36:02.874010 2204 log.go:172] (0xc0007994a0) (3) Data frame sent\nI0511 19:36:02.874024 2204 log.go:172] (0xc000acfa20) Data frame received for 5\nI0511 19:36:02.874029 2204 log.go:172] (0xc000456aa0) (5) Data frame handling\nI0511 19:36:02.874035 2204 log.go:172] (0xc000456aa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.219.156:80/\nI0511 19:36:02.877459 2204 log.go:172] (0xc000acfa20) Data frame received for 3\nI0511 19:36:02.877477 2204 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0511 19:36:02.877503 2204 log.go:172] (0xc0007994a0) (3) Data frame sent\nI0511 19:36:02.877987 2204 log.go:172] (0xc000acfa20) Data frame received for 5\nI0511 19:36:02.878000 2204 log.go:172] (0xc000456aa0) (5) Data frame handling\nI0511 19:36:02.878012 2204 log.go:172] (0xc000456aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0511 19:36:02.878067 2204 log.go:172] (0xc000acfa20) Data frame received for 5\nI0511 19:36:02.878079 2204 log.go:172] (0xc000456aa0) (5) Data frame handling\nI0511 19:36:02.878090 2204 log.go:172] (0xc000456aa0) (5) Data frame sent\n http://10.98.219.156:80/\nI0511 19:36:02.878118 2204 log.go:172] (0xc000acfa20) Data frame received for 3\nI0511 19:36:02.878133 2204 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0511 19:36:02.878141 2204 
log.go:172] (0xc0007994a0) (3) Data frame sent\n
[... stream-frame records for goroutine 2204 from 19:36:02.885 through 19:36:02.945 elided: the remaining '+ echo' / '+ curl -q -s --connect-timeout 2 http://10.98.219.156:80/' iterations of the 16-request loop, each interleaved with the same 'Data frame received / handling / sent' bookkeeping; every request completed and its response appears in the stdout below ...]
I0511 19:36:02.945488 2204 log.go:172] (0xc000acfa20) Data frame received for 3\nI0511 19:36:02.945523 2204 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0511 19:36:02.945537 2204 log.go:172] (0xc0007994a0) (3) Data frame sent\nI0511 19:36:02.945556 2204 log.go:172] (0xc000acfa20) Data frame received for 5\nI0511 19:36:02.945568 2204 log.go:172] (0xc000456aa0) (5) Data frame
handling\nI0511 19:36:02.945579 2204 log.go:172] (0xc000456aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.219.156:80/\nI0511 19:36:02.948666 2204 log.go:172] (0xc000acfa20) Data frame received for 3\nI0511 19:36:02.948686 2204 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0511 19:36:02.948703 2204 log.go:172] (0xc0007994a0) (3) Data frame sent\nI0511 19:36:02.949318 2204 log.go:172] (0xc000acfa20) Data frame received for 5\nI0511 19:36:02.949352 2204 log.go:172] (0xc000456aa0) (5) Data frame handling\nI0511 19:36:02.949365 2204 log.go:172] (0xc000456aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.219.156:80/\nI0511 19:36:02.949383 2204 log.go:172] (0xc000acfa20) Data frame received for 3\nI0511 19:36:02.949397 2204 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0511 19:36:02.949407 2204 log.go:172] (0xc0007994a0) (3) Data frame sent\nI0511 19:36:02.956491 2204 log.go:172] (0xc000acfa20) Data frame received for 3\nI0511 19:36:02.956519 2204 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0511 19:36:02.956542 2204 log.go:172] (0xc0007994a0) (3) Data frame sent\nI0511 19:36:02.956942 2204 log.go:172] (0xc000acfa20) Data frame received for 5\nI0511 19:36:02.956957 2204 log.go:172] (0xc000456aa0) (5) Data frame handling\nI0511 19:36:02.957231 2204 log.go:172] (0xc000acfa20) Data frame received for 3\nI0511 19:36:02.957248 2204 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0511 19:36:02.958896 2204 log.go:172] (0xc000acfa20) Data frame received for 1\nI0511 19:36:02.958907 2204 log.go:172] (0xc000758f00) (1) Data frame handling\nI0511 19:36:02.958917 2204 log.go:172] (0xc000758f00) (1) Data frame sent\nI0511 19:36:02.958930 2204 log.go:172] (0xc000acfa20) (0xc000758f00) Stream removed, broadcasting: 1\nI0511 19:36:02.959090 2204 log.go:172] (0xc000acfa20) Go away received\nI0511 19:36:02.959160 2204 log.go:172] (0xc000acfa20) (0xc000758f00) Stream removed, broadcasting: 1\nI0511 19:36:02.959172 2204 log.go:172] (0xc000acfa20) (0xc0007994a0) Stream removed, broadcasting: 3\nI0511 19:36:02.959189 2204 log.go:172] (0xc000acfa20) (0xc000456aa0) Stream removed, broadcasting: 5\n" May 11 19:36:02.964: INFO: stdout: "\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6\naffinity-clusterip-rggt6" May 11 19:36:02.964: INFO: Received response from host: May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received 
response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Received response from host: affinity-clusterip-rggt6 May 11 19:36:02.964: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-8721, will wait for the garbage collector to delete the pods May 11 19:36:04.258: INFO: Deleting ReplicationController affinity-clusterip took: 219.245662ms May 11 19:36:05.358: INFO: Terminating ReplicationController affinity-clusterip pods took: 1.100256389s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:36:25.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8721" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:41.459 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":127,"skipped":2029,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:36:25.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:36:25.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f" in namespace "projected-4367" to be "Succeeded or Failed" May 11 19:36:25.679: INFO: Pod "downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.969311ms May 11 19:36:28.270: INFO: Pod "downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.59352939s May 11 19:36:30.311: INFO: Pod "downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.634833896s May 11 19:36:33.142: INFO: Pod "downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.465584171s STEP: Saw pod success May 11 19:36:33.142: INFO: Pod "downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f" satisfied condition "Succeeded or Failed" May 11 19:36:33.437: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f container client-container: STEP: delete the pod May 11 19:36:33.870: INFO: Waiting for pod downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f to disappear May 11 19:36:34.054: INFO: Pod downwardapi-volume-f7ad1763-657a-4638-a9db-0480a9c2cb1f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:36:34.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4367" for this suite. • [SLOW TEST:8.938 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":128,"skipped":2041,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:36:34.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 19:36:47.683: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 19:36:47.735: INFO: Pod pod-with-poststart-http-hook still exists May 11 19:36:49.735: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 19:36:49.738: INFO: Pod pod-with-poststart-http-hook still exists May 11 19:36:51.735: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 19:36:51.739: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:36:51.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8745" for this suite. 
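[Editor's note: the 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' and 'Waiting for pod ... to disappear' records above are produced by the framework polling on an interval until a condition or a timeout is hit. A minimal stdlib-only Go sketch of that pattern follows; the check function, interval, and printed messages are illustrative assumptions, not the framework's actual code.]

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForCondition polls check on every interval until it returns true,
// returns an error, or the timeout elapses, mirroring the
// "Waiting up to ..." / "Elapsed: ..." records in the log above.
func waitForCondition(timeout, interval time.Duration, check func() (bool, error)) error {
	start := time.Now()
	for {
		done, err := check()
		if err != nil {
			return err
		}
		if done {
			fmt.Printf("condition met, elapsed: %s\n", time.Since(start))
			return nil
		}
		if time.Since(start) > timeout {
			return errors.New("timed out waiting for condition")
		}
		fmt.Printf("still pending, elapsed: %s\n", time.Since(start))
		time.Sleep(interval)
	}
}

func main() {
	// Hypothetical check standing in for "pod phase is Succeeded or Failed":
	// here the condition simply becomes true after five seconds.
	deadline := time.Now().Add(5 * time.Second)
	_ = waitForCondition(10*time.Second, time.Second, func() (bool, error) {
		return time.Now().After(deadline), nil
	})
}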
• [SLOW TEST:17.214 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2048,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:36:51.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:36:52.730: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:36:54.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822612, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822612, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822612, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724822612, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:36:57.965: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:36:58.304: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "webhook-2085" for this suite. STEP: Destroying namespace "webhook-2085-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.692 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":130,"skipped":2056,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:37:00.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 11 19:37:09.407: INFO: Successfully updated pod "adopt-release-bkm4l" STEP: Checking that the Job readopts the Pod May 11 19:37:09.407: INFO: Waiting up to 15m0s for pod "adopt-release-bkm4l" in namespace "job-16" to be "adopted" May 11 19:37:09.418: INFO: Pod "adopt-release-bkm4l": Phase="Running", Reason="", readiness=true. Elapsed: 11.161005ms May 11 19:37:11.423: INFO: Pod "adopt-release-bkm4l": Phase="Running", Reason="", readiness=true. Elapsed: 2.015354023s May 11 19:37:11.423: INFO: Pod "adopt-release-bkm4l" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 11 19:37:11.933: INFO: Successfully updated pod "adopt-release-bkm4l" STEP: Checking that the Job releases the Pod May 11 19:37:11.933: INFO: Waiting up to 15m0s for pod "adopt-release-bkm4l" in namespace "job-16" to be "released" May 11 19:37:11.994: INFO: Pod "adopt-release-bkm4l": Phase="Running", Reason="", readiness=true. Elapsed: 60.85542ms May 11 19:37:13.998: INFO: Pod "adopt-release-bkm4l": Phase="Running", Reason="", readiness=true. Elapsed: 2.064939563s May 11 19:37:13.998: INFO: Pod "adopt-release-bkm4l" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:37:13.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-16" for this suite. 
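[Editor's note: for context on the orphan/adopt/release steps above, a controller adopts an orphaned pod when the pod's labels match its selector, and releases a pod whose labels stop matching, by adding or dropping the controller ownerReference. A rough Go sketch of the matching rule follows, simplified to exact key=value matching; the real controller uses full metav1.LabelSelector semantics.]

package main

import "fmt"

// selectorMatches reports whether every key=value pair in selector is
// present in labels: the (simplified) condition under which a Job
// controller readopts an orphaned pod or keeps one it already owns.
func selectorMatches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"job-name": "adopt-release"}
	pod := map[string]string{"job-name": "adopt-release"}
	fmt.Println("matches:", selectorMatches(selector, pod)) // true: pod is readopted
	delete(pod, "job-name")                                 // the test removes the label
	fmt.Println("matches:", selectorMatches(selector, pod)) // false: pod is released
}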
• [SLOW TEST:13.562 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":131,"skipped":2064,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:37:14.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:37:14.756: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492" in namespace "projected-1917" to be "Succeeded or Failed" May 11 19:37:14.803: INFO: Pod "downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492": Phase="Pending", Reason="", readiness=false. Elapsed: 46.616196ms May 11 19:37:16.807: INFO: Pod "downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051061266s May 11 19:37:18.811: INFO: Pod "downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054514663s May 11 19:37:20.816: INFO: Pod "downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060088036s STEP: Saw pod success May 11 19:37:20.816: INFO: Pod "downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492" satisfied condition "Succeeded or Failed" May 11 19:37:20.819: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492 container client-container: STEP: delete the pod May 11 19:37:20.880: INFO: Waiting for pod downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492 to disappear May 11 19:37:20.894: INFO: Pod downwardapi-volume-40679443-a82b-4aef-8529-1ccb44eaa492 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:37:20.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1917" for this suite. 
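[Editor's note: the "set mode on item file" test above mounts a downward API volume item with an explicit file mode. A sketch of the shape of such a volume source, built as plain Go maps and marshalled to JSON, follows; field names follow the core/v1 downwardAPI volume source, the volume and path names are illustrative, and the projected variant used by this test nests the same structure under projected.sources.]

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A downward API volume with one item: expose the pod's name at
	// podinfo/podname with file mode 0400. In JSON manifests the octal
	// mode 0400 is written as the decimal 256.
	volume := map[string]any{
		"name": "podinfo",
		"downwardAPI": map[string]any{
			"items": []map[string]any{{
				"path":     "podinfo/podname",
				"fieldRef": map[string]string{"fieldPath": "metadata.name"},
				"mode":     0o400,
			}},
		},
	}
	out, _ := json.MarshalIndent(volume, "", "  ")
	fmt.Println(string(out))
}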
• [SLOW TEST:6.919 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2066,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:37:20.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 11 19:37:21.015: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 19:37:21.037: INFO: Waiting for terminating namespaces to be deleted... May 11 19:37:21.039: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 11 19:37:21.043: INFO: rally-7f047e53-ekyvxn8c-lwbqj from c-rally-7f047e53-lpyp5jpf started at 2020-05-11 19:36:33 +0000 UTC (1 container statuses recorded) May 11 19:37:21.043: INFO: Container rally-7f047e53-ekyvxn8c ready: false, restart count 0 May 11 19:37:21.043: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 11 19:37:21.043: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 11 19:37:21.043: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 11 19:37:21.043: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 11 19:37:21.043: INFO: adopt-release-bkm4l from job-16 started at 2020-05-11 19:37:01 +0000 UTC (1 container statuses recorded) May 11 19:37:21.043: INFO: Container c ready: true, restart count 0 May 11 19:37:21.043: INFO: adopt-release-ksjsx from job-16 started at 2020-05-11 19:37:01 +0000 UTC (1 container statuses recorded) May 11 19:37:21.043: INFO: Container c ready: true, restart count 0 May 11 19:37:21.043: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 19:37:21.043: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:37:21.043: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 19:37:21.043: INFO: Container kube-proxy ready: true, restart count 0 May 11 19:37:21.043: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 11 19:37:21.048: INFO: rally-7f047e53-6nef2pw1-5w5kv from c-rally-7f047e53-lpyp5jpf started at 2020-05-11 19:36:53 +0000 UTC (1 container statuses recorded) May 11 19:37:21.048: INFO: Container rally-7f047e53-6nef2pw1 
ready: false, restart count 0 May 11 19:37:21.048: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 11 19:37:21.048: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 11 19:37:21.048: INFO: adopt-release-4wm4m from job-16 started at 2020-05-11 19:37:12 +0000 UTC (1 container statuses recorded) May 11 19:37:21.048: INFO: Container c ready: true, restart count 0 May 11 19:37:21.048: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 19:37:21.048: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:37:21.048: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 19:37:21.048: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8572e289-7c1f-411b-8d0e-b7146fbf2a08 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-8572e289-7c1f-411b-8d0e-b7146fbf2a08 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-8572e289-7c1f-411b-8d0e-b7146fbf2a08 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:42:29.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7193" for this suite. 
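What the predicate enforces: a hostPort/protocol pair already bound to 0.0.0.0 on a node conflicts with any hostIP for the same port on that node, so pod5 must stay Pending. A minimal sketch of the two specs, pinned to the labeled node from the log (image is illustrative; only the port declaration matters to the scheduler):

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-8572e289-7c1f-411b-8d0e-b7146fbf2a08: "95"
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP        # hostIP omitted, i.e. 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-8572e289-7c1f-411b-8d0e-b7146fbf2a08: "95"
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1    # conflicts with pod4's wildcard bind; stays Pending

Pinning with a nodeSelector rather than spec.nodeName matters here: nodeName bypasses the scheduler entirely, and the conflict under test is a scheduling predicate.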
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.556 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":133,"skipped":2073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:42:29.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9130.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9130.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 19:42:38.419: INFO: DNS probes using dns-9130/dns-test-ab7139b1-be3a-4b7e-8988-aa5f261d83a4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:42:38.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9130" for this suite. • [SLOW TEST:9.717 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":134,"skipped":2120,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:42:39.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9dbdaa62-3c4f-4bb9-805c-e61ef28f869d STEP: Creating a pod to test consume secrets May 11 19:42:40.728: INFO: Waiting up to 5m0s for pod "pod-secrets-217224fe-feb2-41be-8ec6-8f389506966d" in namespace "secrets-5090" to be "Succeeded or Failed" May 11 19:42:40.931: INFO: Pod "pod-secrets-217224fe-feb2-41be-8ec6-8f389506966d": Phase="Pending", Reason="", readiness=false. Elapsed: 202.655317ms May 11 19:42:43.051: INFO: Pod "pod-secrets-217224fe-feb2-41be-8ec6-8f389506966d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323124279s May 11 19:42:45.283: INFO: Pod "pod-secrets-217224fe-feb2-41be-8ec6-8f389506966d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.554884859s STEP: Saw pod success May 11 19:42:45.283: INFO: Pod "pod-secrets-217224fe-feb2-41be-8ec6-8f389506966d" satisfied condition "Succeeded or Failed" May 11 19:42:45.290: INFO: Trying to get logs from node latest-worker pod pod-secrets-217224fe-feb2-41be-8ec6-8f389506966d container secret-volume-test: STEP: delete the pod May 11 19:42:45.793: INFO: Waiting for pod pod-secrets-217224fe-feb2-41be-8ec6-8f389506966d to disappear May 11 19:42:45.901: INFO: Pod pod-secrets-217224fe-feb2-41be-8ec6-8f389506966d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:42:45.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5090" for this suite. STEP: Destroying namespace "secret-namespace-4770" for this suite. • [SLOW TEST:6.911 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:42:46.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 11 19:42:46.678: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5154 /api/v1/namespaces/watch-5154/configmaps/e2e-watch-test-resource-version a16c913f-c46d-492d-95d4-0a2ef5e73dd0 3542635 0 2020-05-11 19:42:46 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-11 19:42:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 19:42:46.678: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5154 /api/v1/namespaces/watch-5154/configmaps/e2e-watch-test-resource-version a16c913f-c46d-492d-95d4-0a2ef5e73dd0 3542636 0 2020-05-11 19:42:46 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-11 19:42:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:42:46.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5154" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":136,"skipped":2166,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:42:46.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 11 19:42:47.148: INFO: Waiting up to 5m0s for pod "client-containers-31f9e228-95e5-4dd7-aa11-723651682b86" in namespace "containers-7218" to be "Succeeded or Failed" May 11 19:42:47.230: INFO: Pod "client-containers-31f9e228-95e5-4dd7-aa11-723651682b86": Phase="Pending", Reason="", readiness=false. Elapsed: 82.228699ms May 11 19:42:49.273: INFO: Pod "client-containers-31f9e228-95e5-4dd7-aa11-723651682b86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124399211s May 11 19:42:51.500: INFO: Pod "client-containers-31f9e228-95e5-4dd7-aa11-723651682b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.351847006s STEP: Saw pod success May 11 19:42:51.500: INFO: Pod "client-containers-31f9e228-95e5-4dd7-aa11-723651682b86" satisfied condition "Succeeded or Failed" May 11 19:42:51.662: INFO: Trying to get logs from node latest-worker pod client-containers-31f9e228-95e5-4dd7-aa11-723651682b86 container test-container: STEP: delete the pod May 11 19:42:51.723: INFO: Waiting for pod client-containers-31f9e228-95e5-4dd7-aa11-723651682b86 to disappear May 11 19:42:51.738: INFO: Pod client-containers-31f9e228-95e5-4dd7-aa11-723651682b86 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:42:51.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7218" for this suite. 
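The override rule verified above: in a PodSpec, command replaces the image's ENTRYPOINT and args replaces its CMD, so supplying both overrides everything the image declares. A minimal sketch, assuming a busybox image in place of the suite's test image (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]           # replaces the image ENTRYPOINT
    args: ["override", "all"]   # replaces the image CMD

kubectl logs override-demo then prints "override all", regardless of what the image itself would run.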
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2170,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:42:51.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7948 STEP: creating service affinity-nodeport-transition in namespace services-7948 STEP: creating replication controller affinity-nodeport-transition in namespace services-7948 I0511 19:42:51.974527 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-7948, replica count: 3 I0511 19:42:55.024892 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:42:58.025430 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:43:01.025641 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 19:43:01.034: INFO: Creating new exec pod May 11 19:43:06.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7948 execpod-affinityhl72v -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 11 19:43:10.294: INFO: stderr: "I0511 19:43:10.202326 2227 log.go:172] (0xc000d42000) (0xc000684c80) Create stream\nI0511 19:43:10.202379 2227 log.go:172] (0xc000d42000) (0xc000684c80) Stream added, broadcasting: 1\nI0511 19:43:10.204406 2227 log.go:172] (0xc000d42000) Reply frame received for 1\nI0511 19:43:10.204460 2227 log.go:172] (0xc000d42000) (0xc000632500) Create stream\nI0511 19:43:10.204480 2227 log.go:172] (0xc000d42000) (0xc000632500) Stream added, broadcasting: 3\nI0511 19:43:10.205813 2227 log.go:172] (0xc000d42000) Reply frame received for 3\nI0511 19:43:10.205859 2227 log.go:172] (0xc000d42000) (0xc000632dc0) Create stream\nI0511 19:43:10.205876 2227 log.go:172] (0xc000d42000) (0xc000632dc0) Stream added, broadcasting: 5\nI0511 19:43:10.206859 2227 log.go:172] (0xc000d42000) Reply frame received for 5\nI0511 19:43:10.288433 2227 log.go:172] (0xc000d42000) Data frame received for 5\nI0511 19:43:10.288470 2227 log.go:172] (0xc000632dc0) (5) Data frame handling\nI0511 19:43:10.288494 2227 log.go:172] (0xc000632dc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0511 19:43:10.288718 2227 log.go:172] (0xc000d42000) Data frame 
received for 5\nI0511 19:43:10.288736 2227 log.go:172] (0xc000632dc0) (5) Data frame handling\nI0511 19:43:10.288748 2227 log.go:172] (0xc000632dc0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0511 19:43:10.288946 2227 log.go:172] (0xc000d42000) Data frame received for 5\nI0511 19:43:10.288962 2227 log.go:172] (0xc000632dc0) (5) Data frame handling\nI0511 19:43:10.289079 2227 log.go:172] (0xc000d42000) Data frame received for 3\nI0511 19:43:10.289095 2227 log.go:172] (0xc000632500) (3) Data frame handling\nI0511 19:43:10.290793 2227 log.go:172] (0xc000d42000) Data frame received for 1\nI0511 19:43:10.290806 2227 log.go:172] (0xc000684c80) (1) Data frame handling\nI0511 19:43:10.290813 2227 log.go:172] (0xc000684c80) (1) Data frame sent\nI0511 19:43:10.290934 2227 log.go:172] (0xc000d42000) (0xc000684c80) Stream removed, broadcasting: 1\nI0511 19:43:10.291106 2227 log.go:172] (0xc000d42000) Go away received\nI0511 19:43:10.291181 2227 log.go:172] (0xc000d42000) (0xc000684c80) Stream removed, broadcasting: 1\nI0511 19:43:10.291197 2227 log.go:172] (0xc000d42000) (0xc000632500) Stream removed, broadcasting: 3\nI0511 19:43:10.291203 2227 log.go:172] (0xc000d42000) (0xc000632dc0) Stream removed, broadcasting: 5\n" May 11 19:43:10.295: INFO: stdout: "" May 11 19:43:10.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7948 execpod-affinityhl72v -- /bin/sh -x -c nc -zv -t -w 2 10.106.116.18 80' May 11 19:43:10.605: INFO: stderr: "I0511 19:43:10.540499 2260 log.go:172] (0xc000b21ad0) (0xc0008466e0) Create stream\nI0511 19:43:10.540557 2260 log.go:172] (0xc000b21ad0) (0xc0008466e0) Stream added, broadcasting: 1\nI0511 19:43:10.543053 2260 log.go:172] (0xc000b21ad0) Reply frame received for 1\nI0511 19:43:10.543107 2260 log.go:172] (0xc000b21ad0) (0xc00084c000) Create stream\nI0511 19:43:10.543126 2260 log.go:172] (0xc000b21ad0) (0xc00084c000) Stream added, broadcasting: 3\nI0511 19:43:10.544054 2260 log.go:172] (0xc000b21ad0) Reply frame received for 3\nI0511 19:43:10.544083 2260 log.go:172] (0xc000b21ad0) (0xc000847040) Create stream\nI0511 19:43:10.544092 2260 log.go:172] (0xc000b21ad0) (0xc000847040) Stream added, broadcasting: 5\nI0511 19:43:10.544984 2260 log.go:172] (0xc000b21ad0) Reply frame received for 5\nI0511 19:43:10.598322 2260 log.go:172] (0xc000b21ad0) Data frame received for 5\nI0511 19:43:10.598352 2260 log.go:172] (0xc000847040) (5) Data frame handling\nI0511 19:43:10.598371 2260 log.go:172] (0xc000847040) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.116.18 80\nI0511 19:43:10.598677 2260 log.go:172] (0xc000b21ad0) Data frame received for 5\nI0511 19:43:10.598698 2260 log.go:172] (0xc000847040) (5) Data frame handling\nI0511 19:43:10.598709 2260 log.go:172] (0xc000847040) (5) Data frame sent\nConnection to 10.106.116.18 80 port [tcp/http] succeeded!\nI0511 19:43:10.598952 2260 log.go:172] (0xc000b21ad0) Data frame received for 5\nI0511 19:43:10.598982 2260 log.go:172] (0xc000847040) (5) Data frame handling\nI0511 19:43:10.599037 2260 log.go:172] (0xc000b21ad0) Data frame received for 3\nI0511 19:43:10.599048 2260 log.go:172] (0xc00084c000) (3) Data frame handling\nI0511 19:43:10.600452 2260 log.go:172] (0xc000b21ad0) Data frame received for 1\nI0511 19:43:10.600472 2260 log.go:172] (0xc0008466e0) (1) Data frame handling\nI0511 19:43:10.600488 2260 log.go:172] (0xc0008466e0) (1) Data frame sent\nI0511 19:43:10.600795 2260 log.go:172] 
(0xc000b21ad0) (0xc0008466e0) Stream removed, broadcasting: 1\nI0511 19:43:10.600817 2260 log.go:172] (0xc000b21ad0) Go away received\nI0511 19:43:10.601585 2260 log.go:172] (0xc000b21ad0) (0xc0008466e0) Stream removed, broadcasting: 1\nI0511 19:43:10.601628 2260 log.go:172] (0xc000b21ad0) (0xc00084c000) Stream removed, broadcasting: 3\nI0511 19:43:10.601648 2260 log.go:172] (0xc000b21ad0) (0xc000847040) Stream removed, broadcasting: 5\n" May 11 19:43:10.605: INFO: stdout: "" May 11 19:43:10.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7948 execpod-affinityhl72v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31646' May 11 19:43:10.853: INFO: stderr: "I0511 19:43:10.780503 2279 log.go:172] (0xc0009451e0) (0xc0009d23c0) Create stream\nI0511 19:43:10.780662 2279 log.go:172] (0xc0009451e0) (0xc0009d23c0) Stream added, broadcasting: 1\nI0511 19:43:10.784426 2279 log.go:172] (0xc0009451e0) Reply frame received for 1\nI0511 19:43:10.784465 2279 log.go:172] (0xc0009451e0) (0xc00055c5a0) Create stream\nI0511 19:43:10.784477 2279 log.go:172] (0xc0009451e0) (0xc00055c5a0) Stream added, broadcasting: 3\nI0511 19:43:10.785289 2279 log.go:172] (0xc0009451e0) Reply frame received for 3\nI0511 19:43:10.785317 2279 log.go:172] (0xc0009451e0) (0xc0004e4dc0) Create stream\nI0511 19:43:10.785331 2279 log.go:172] (0xc0009451e0) (0xc0004e4dc0) Stream added, broadcasting: 5\nI0511 19:43:10.785973 2279 log.go:172] (0xc0009451e0) Reply frame received for 5\nI0511 19:43:10.846813 2279 log.go:172] (0xc0009451e0) Data frame received for 5\nI0511 19:43:10.846915 2279 log.go:172] (0xc0004e4dc0) (5) Data frame handling\nI0511 19:43:10.846970 2279 log.go:172] (0xc0004e4dc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31646\nConnection to 172.17.0.13 31646 port [tcp/31646] succeeded!\nI0511 19:43:10.846992 2279 log.go:172] (0xc0009451e0) Data frame received for 5\nI0511 19:43:10.847004 2279 log.go:172] (0xc0004e4dc0) (5) Data frame handling\nI0511 19:43:10.847029 2279 log.go:172] (0xc0009451e0) Data frame received for 3\nI0511 19:43:10.847048 2279 log.go:172] (0xc00055c5a0) (3) Data frame handling\nI0511 19:43:10.848075 2279 log.go:172] (0xc0009451e0) Data frame received for 1\nI0511 19:43:10.848102 2279 log.go:172] (0xc0009d23c0) (1) Data frame handling\nI0511 19:43:10.848129 2279 log.go:172] (0xc0009d23c0) (1) Data frame sent\nI0511 19:43:10.848155 2279 log.go:172] (0xc0009451e0) (0xc0009d23c0) Stream removed, broadcasting: 1\nI0511 19:43:10.848180 2279 log.go:172] (0xc0009451e0) Go away received\nI0511 19:43:10.848517 2279 log.go:172] (0xc0009451e0) (0xc0009d23c0) Stream removed, broadcasting: 1\nI0511 19:43:10.848532 2279 log.go:172] (0xc0009451e0) (0xc00055c5a0) Stream removed, broadcasting: 3\nI0511 19:43:10.848540 2279 log.go:172] (0xc0009451e0) (0xc0004e4dc0) Stream removed, broadcasting: 5\n" May 11 19:43:10.853: INFO: stdout: "" May 11 19:43:10.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7948 execpod-affinityhl72v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31646' May 11 19:43:11.037: INFO: stderr: "I0511 19:43:10.972694 2300 log.go:172] (0xc000a99340) (0xc000bb6320) Create stream\nI0511 19:43:10.972735 2300 log.go:172] (0xc000a99340) (0xc000bb6320) Stream added, broadcasting: 1\nI0511 19:43:10.976737 2300 log.go:172] (0xc000a99340) Reply frame received for 1\nI0511 19:43:10.976765 2300 log.go:172] (0xc000a99340) (0xc000484d20) 
Create stream\nI0511 19:43:10.976773 2300 log.go:172] (0xc000a99340) (0xc000484d20) Stream added, broadcasting: 3\nI0511 19:43:10.977977 2300 log.go:172] (0xc000a99340) Reply frame received for 3\nI0511 19:43:10.978018 2300 log.go:172] (0xc000a99340) (0xc000482460) Create stream\nI0511 19:43:10.978036 2300 log.go:172] (0xc000a99340) (0xc000482460) Stream added, broadcasting: 5\nI0511 19:43:10.978999 2300 log.go:172] (0xc000a99340) Reply frame received for 5\nI0511 19:43:11.032124 2300 log.go:172] (0xc000a99340) Data frame received for 3\nI0511 19:43:11.032298 2300 log.go:172] (0xc000484d20) (3) Data frame handling\nI0511 19:43:11.032335 2300 log.go:172] (0xc000a99340) Data frame received for 5\nI0511 19:43:11.032346 2300 log.go:172] (0xc000482460) (5) Data frame handling\nI0511 19:43:11.032358 2300 log.go:172] (0xc000482460) (5) Data frame sent\nI0511 19:43:11.032366 2300 log.go:172] (0xc000a99340) Data frame received for 5\nI0511 19:43:11.032372 2300 log.go:172] (0xc000482460) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31646\nConnection to 172.17.0.12 31646 port [tcp/31646] succeeded!\nI0511 19:43:11.033824 2300 log.go:172] (0xc000a99340) Data frame received for 1\nI0511 19:43:11.033849 2300 log.go:172] (0xc000bb6320) (1) Data frame handling\nI0511 19:43:11.033864 2300 log.go:172] (0xc000bb6320) (1) Data frame sent\nI0511 19:43:11.033884 2300 log.go:172] (0xc000a99340) (0xc000bb6320) Stream removed, broadcasting: 1\nI0511 19:43:11.033912 2300 log.go:172] (0xc000a99340) Go away received\nI0511 19:43:11.034218 2300 log.go:172] (0xc000a99340) (0xc000bb6320) Stream removed, broadcasting: 1\nI0511 19:43:11.034238 2300 log.go:172] (0xc000a99340) (0xc000484d20) Stream removed, broadcasting: 3\nI0511 19:43:11.034247 2300 log.go:172] (0xc000a99340) (0xc000482460) Stream removed, broadcasting: 5\n" May 11 19:43:11.037: INFO: stdout: "" May 11 19:43:11.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7948 execpod-affinityhl72v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31646/ ; done' May 11 19:43:11.379: INFO: stderr: "I0511 19:43:11.201346 2321 log.go:172] (0xc0005bcfd0) (0xc000aae780) Create stream\nI0511 19:43:11.201430 2321 log.go:172] (0xc0005bcfd0) (0xc000aae780) Stream added, broadcasting: 1\nI0511 19:43:11.205845 2321 log.go:172] (0xc0005bcfd0) Reply frame received for 1\nI0511 19:43:11.205888 2321 log.go:172] (0xc0005bcfd0) (0xc0004ecc80) Create stream\nI0511 19:43:11.205905 2321 log.go:172] (0xc0005bcfd0) (0xc0004ecc80) Stream added, broadcasting: 3\nI0511 19:43:11.206919 2321 log.go:172] (0xc0005bcfd0) Reply frame received for 3\nI0511 19:43:11.206990 2321 log.go:172] (0xc0005bcfd0) (0xc00038e000) Create stream\nI0511 19:43:11.207022 2321 log.go:172] (0xc0005bcfd0) (0xc00038e000) Stream added, broadcasting: 5\nI0511 19:43:11.208082 2321 log.go:172] (0xc0005bcfd0) Reply frame received for 5\nI0511 19:43:11.278046 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.278109 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.278137 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.278169 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.278189 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.278230 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 
19:43:11.281088 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.281270 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.281317 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.281426 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.281439 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.281445 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.281459 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.281467 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.281474 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.290020 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.290035 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.290048 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.290936 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.290953 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.290971 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.290978 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.290991 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.290997 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.295423 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.295450 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.295480 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.295955 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.295968 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.295974 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.296147 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.296175 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.296202 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.303157 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.303190 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.303215 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.303812 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.303829 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.303838 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.303849 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.303857 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.303862 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.309714 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.309734 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.309747 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.309990 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.310019 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.310044 2321 log.go:172] (0xc00038e000) (5) Data frame 
sent\nI0511 19:43:11.310059 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.310067 2321 log.go:172] (0xc00038e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.310098 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.310133 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.310147 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.310166 2321 log.go:172] (0xc00038e000) (5) Data frame sent\nI0511 19:43:11.319352 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.319378 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.319454 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.319883 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.319903 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.319916 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.319932 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.319941 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.319948 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.324347 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.324366 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.324381 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.324748 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.324764 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.324780 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.324787 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.324797 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.324803 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.331840 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.331855 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.331871 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.332829 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.332867 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.332886 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.332905 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.332916 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.332928 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.337349 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.337371 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.337379 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.337839 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.337862 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.337870 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.337883 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.337893 2321 log.go:172] (0xc0004ecc80) (3) Data frame 
handling\nI0511 19:43:11.337900 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.342257 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.342275 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.342299 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.342899 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.342919 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.342927 2321 log.go:172] (0xc00038e000) (5) Data frame sent\nI0511 19:43:11.342945 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/I0511 19:43:11.342952 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.343015 2321 log.go:172] (0xc00038e000) (5) Data frame sent\nI0511 19:43:11.343031 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.343037 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.343044 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\n\nI0511 19:43:11.346757 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.346784 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.346806 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.347471 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.347485 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.347502 2321 log.go:172] (0xc00038e000) (5) Data frame sent\nI0511 19:43:11.347518 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.347542 2321 log.go:172] (0xc00038e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.347552 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.347590 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.347625 2321 log.go:172] (0xc00038e000) (5) Data frame sent\nI0511 19:43:11.347652 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.351991 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.352009 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.352024 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.352829 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.352848 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.352864 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.352903 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.352932 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.352980 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.358654 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.358670 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.358679 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.359070 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.359092 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.359100 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.359110 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.359114 2321 log.go:172] (0xc0004ecc80) (3) 
Data frame handling\nI0511 19:43:11.359119 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.363624 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.363643 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.363665 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.364048 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.364060 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.364070 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.364197 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.364232 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.364250 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.368679 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.368702 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.368724 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.369004 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.369022 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.369030 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.369049 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.369062 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.369071 2321 log.go:172] (0xc00038e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.372084 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.372098 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.372105 2321 log.go:172] (0xc0004ecc80) (3) Data frame sent\nI0511 19:43:11.372581 2321 log.go:172] (0xc0005bcfd0) Data frame received for 3\nI0511 19:43:11.372636 2321 log.go:172] (0xc0004ecc80) (3) Data frame handling\nI0511 19:43:11.372830 2321 log.go:172] (0xc0005bcfd0) Data frame received for 5\nI0511 19:43:11.372864 2321 log.go:172] (0xc00038e000) (5) Data frame handling\nI0511 19:43:11.375001 2321 log.go:172] (0xc0005bcfd0) Data frame received for 1\nI0511 19:43:11.375036 2321 log.go:172] (0xc000aae780) (1) Data frame handling\nI0511 19:43:11.375056 2321 log.go:172] (0xc000aae780) (1) Data frame sent\nI0511 19:43:11.375072 2321 log.go:172] (0xc0005bcfd0) (0xc000aae780) Stream removed, broadcasting: 1\nI0511 19:43:11.375090 2321 log.go:172] (0xc0005bcfd0) Go away received\nI0511 19:43:11.375509 2321 log.go:172] (0xc0005bcfd0) (0xc000aae780) Stream removed, broadcasting: 1\nI0511 19:43:11.375526 2321 log.go:172] (0xc0005bcfd0) (0xc0004ecc80) Stream removed, broadcasting: 3\nI0511 19:43:11.375535 2321 log.go:172] (0xc0005bcfd0) (0xc00038e000) Stream removed, broadcasting: 5\n" May 11 19:43:11.380: INFO: stdout: "\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-nzppc\naffinity-nodeport-transition-nzppc\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-nzppc\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-dq8vc\naffinity-nodeport-transition-dq8vc" May 11 19:43:11.380: INFO: 
Received response from host: May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-nzppc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-nzppc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-nzppc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.380: INFO: Received response from host: affinity-nodeport-transition-dq8vc May 11 19:43:11.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7948 execpod-affinityhl72v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31646/ ; done' May 11 19:43:11.721: INFO: stderr: "I0511 19:43:11.567627 2343 log.go:172] (0xc000b8cf20) (0xc00065d680) Create stream\nI0511 19:43:11.567699 2343 log.go:172] (0xc000b8cf20) (0xc00065d680) Stream added, broadcasting: 1\nI0511 19:43:11.572522 2343 log.go:172] (0xc000b8cf20) Reply frame received for 1\nI0511 19:43:11.572577 2343 log.go:172] (0xc000b8cf20) (0xc000520be0) Create stream\nI0511 19:43:11.572588 2343 log.go:172] (0xc000b8cf20) (0xc000520be0) Stream added, broadcasting: 3\nI0511 19:43:11.573812 2343 log.go:172] (0xc000b8cf20) Reply frame received for 3\nI0511 19:43:11.573843 2343 log.go:172] (0xc000b8cf20) (0xc00050c3c0) Create stream\nI0511 19:43:11.573854 2343 log.go:172] (0xc000b8cf20) (0xc00050c3c0) Stream added, broadcasting: 5\nI0511 19:43:11.575002 2343 log.go:172] (0xc000b8cf20) Reply frame received for 5\nI0511 19:43:11.630147 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.630175 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.630199 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.630240 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.630264 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.630283 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.636369 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.636383 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.636389 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.637510 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.637554 2343 log.go:172] (0xc00050c3c0) (5) Data frame 
handling\nI0511 19:43:11.637576 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.637600 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.637618 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.637651 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.643309 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.643325 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.643338 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.643510 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.643526 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.643535 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0511 19:43:11.643661 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.643672 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.643678 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.643741 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.643754 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.643765 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n 2 http://172.17.0.13:31646/\nI0511 19:43:11.647074 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.647101 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.647121 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.647319 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.647331 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.647337 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.647401 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.647412 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.647421 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.650613 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.650627 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.650637 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.650932 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.650952 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.650959 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.650968 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.650975 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.650987 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\nI0511 19:43:11.650999 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.651007 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.651026 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\nI0511 19:43:11.657887 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.657907 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.657920 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.658517 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.658534 2343 log.go:172] (0xc00050c3c0) (5) 
Data frame handling\nI0511 19:43:11.658558 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.658569 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.658582 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.658591 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.662810 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.662834 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.662857 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.663107 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.663120 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.663128 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -sI0511 19:43:11.663135 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.663180 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.663200 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.663221 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.663248 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.663296 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.667078 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.667096 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.667112 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.667501 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.667523 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0511 19:43:11.667541 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.667568 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.667582 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.667606 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\nI0511 19:43:11.667621 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.667634 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.667651 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n http://172.17.0.13:31646/\nI0511 19:43:11.672134 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.672155 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.672167 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.672565 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.672591 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.672603 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.672620 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.672629 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.672639 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.677077 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.677097 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.677283 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.678485 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.678514 2343 log.go:172] 
(0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.678526 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.678542 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.678551 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.678563 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.681806 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.681838 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.681884 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.682230 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.682278 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.682307 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.682325 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.682354 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.682366 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.685662 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.685683 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.685704 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.685856 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.685886 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.685917 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.685934 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.685957 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.685978 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.689326 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.689371 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.689399 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.690215 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.690247 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.690262 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.690285 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.690322 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.690347 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.694662 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.694695 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.694724 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.695300 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.695315 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.695325 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.695424 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.695455 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.695476 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.703152 2343 log.go:172] 
(0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.703218 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.703278 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.703989 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.704110 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.704137 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.704162 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.704174 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.704196 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.708217 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.708238 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.708256 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.708864 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.708909 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.708953 2343 log.go:172] (0xc00050c3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31646/\nI0511 19:43:11.708994 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.709016 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.709032 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.712967 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.712981 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.712988 2343 log.go:172] (0xc000520be0) (3) Data frame sent\nI0511 19:43:11.713832 2343 log.go:172] (0xc000b8cf20) Data frame received for 5\nI0511 19:43:11.713853 2343 log.go:172] (0xc00050c3c0) (5) Data frame handling\nI0511 19:43:11.714061 2343 log.go:172] (0xc000b8cf20) Data frame received for 3\nI0511 19:43:11.714085 2343 log.go:172] (0xc000520be0) (3) Data frame handling\nI0511 19:43:11.715398 2343 log.go:172] (0xc000b8cf20) Data frame received for 1\nI0511 19:43:11.715431 2343 log.go:172] (0xc00065d680) (1) Data frame handling\nI0511 19:43:11.715457 2343 log.go:172] (0xc00065d680) (1) Data frame sent\nI0511 19:43:11.715571 2343 log.go:172] (0xc000b8cf20) (0xc00065d680) Stream removed, broadcasting: 1\nI0511 19:43:11.715616 2343 log.go:172] (0xc000b8cf20) Go away received\nI0511 19:43:11.716172 2343 log.go:172] (0xc000b8cf20) (0xc00065d680) Stream removed, broadcasting: 1\nI0511 19:43:11.716197 2343 log.go:172] (0xc000b8cf20) (0xc000520be0) Stream removed, broadcasting: 3\nI0511 19:43:11.716209 2343 log.go:172] (0xc000b8cf20) (0xc00050c3c0) Stream removed, broadcasting: 5\n" May 11 19:43:11.722: INFO: stdout: "\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4\naffinity-nodeport-transition-kppb4" May 11 19:43:11.722: INFO: Received response from host: May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 
19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Received response from host: affinity-nodeport-transition-kppb4 May 11 19:43:11.722: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-7948, will wait for the garbage collector to delete the pods May 11 19:43:11.834: INFO: Deleting ReplicationController affinity-nodeport-transition took: 7.060474ms May 11 19:43:12.435: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.247352ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:43:25.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7948" for this suite. 
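The pass criterion for the affinity check above is that all sixteen probe replies name the same backend pod (affinity-nodeport-transition-kppb4) while ClientIP affinity is enabled. A minimal standalone sketch of that check in Go — not the e2e framework's own code; the node address and NodePort are the ones from this log and are only reachable from inside the test network:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Node address and NodePort taken from the log above.
	const url = "http://172.17.0.13:31646/"
	client := &http.Client{Timeout: 2 * time.Second}

	seen := map[string]int{} // backend pod name -> hit count
	for i := 0; i < 16; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))]++
	}

	// With ClientIP session affinity enabled, all replies should come from
	// one pod; after the transition back to None they may spread out.
	if len(seen) == 1 {
		fmt.Println("affinity held:", seen)
	} else {
		fmt.Println("requests spread across backends:", seen)
	}
}
```

The "transition" part of the test is simply running this loop twice, toggling the Service's sessionAffinity field in between.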
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:33.707 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":138,"skipped":2175,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:43:25.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:43:25.557: INFO: Creating deployment "test-recreate-deployment" May 11 19:43:25.560: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 11 19:43:25.596: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 11 19:43:27.724: INFO: Waiting deployment "test-recreate-deployment" to complete May 11 19:43:27.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823005, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823005, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823005, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823005, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:43:29.730: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 11 19:43:29.736: INFO: Updating deployment test-recreate-deployment May 11 19:43:29.736: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 19:43:30.590: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9273 /apis/apps/v1/namespaces/deployment-9273/deployments/test-recreate-deployment 
2c40be96-4905-4f2b-b3c3-f8502b128266 3543087 2 2020-05-11 19:43:25 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-11 19:43:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 19:43:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00483fc98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 19:43:30 +0000 UTC,LastTransitionTime:2020-05-11 19:43:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-11 19:43:30 +0000 UTC,LastTransitionTime:2020-05-11 19:43:25 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 11 19:43:30.596: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-9273 /apis/apps/v1/namespaces/deployment-9273/replicasets/test-recreate-deployment-d5667d9c7 47d215ae-6cd5-471d-b074-7e3d3802c740 3543083 1 2020-05-11 19:43:29 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 2c40be96-4905-4f2b-b3c3-f8502b128266 0xc00005b840 0xc00005b841}] [] [{kube-controller-manager Update apps/v1 2020-05-11 19:43:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c40be96-4905-4f2b-b3c3-f8502b128266\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00005b918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 19:43:30.596: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 11 19:43:30.596: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-9273 /apis/apps/v1/namespaces/deployment-9273/replicasets/test-recreate-deployment-6d65b9f6d8 a8d09749-5e3d-41fb-a371-0f0d5e378217 3543075 2 2020-05-11 19:43:25 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 2c40be96-4905-4f2b-b3c3-f8502b128266 0xc00005b4a7 0xc00005b4a8}] [] [{kube-controller-manager Update apps/v1 2020-05-11 19:43:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c40be96-4905-4f2b-b3c3-f8502b128266\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00005b5d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 19:43:30.628: INFO: Pod "test-recreate-deployment-d5667d9c7-84k66" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-84k66 test-recreate-deployment-d5667d9c7- deployment-9273 /api/v1/namespaces/deployment-9273/pods/test-recreate-deployment-d5667d9c7-84k66 bfc30f73-b79f-43c2-a775-253173f1fd3d 3543088 0 2020-05-11 19:43:29 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 47d215ae-6cd5-471d-b074-7e3d3802c740 0xc000757150 0xc000757151}] [] [{kube-controller-manager Update v1 2020-05-11 19:43:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47d215ae-6cd5-471d-b074-7e3d3802c740\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:43:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qd5sj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qd5sj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qd5sj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:43:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:43:30 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:43:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:43:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 19:43:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:43:30.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9273" for this suite. • [SLOW TEST:5.367 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":139,"skipped":2186,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:43:30.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 19:43:31.445: INFO: Waiting up to 5m0s for pod "pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981" in namespace "emptydir-8176" to be "Succeeded or Failed" May 11 19:43:31.579: INFO: Pod "pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981": Phase="Pending", Reason="", readiness=false. Elapsed: 134.607698ms May 11 19:43:33.952: INFO: Pod "pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981": Phase="Pending", Reason="", readiness=false. Elapsed: 2.507340628s May 11 19:43:35.991: INFO: Pod "pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981": Phase="Pending", Reason="", readiness=false. Elapsed: 4.546425787s May 11 19:43:38.211: INFO: Pod "pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.766049367s STEP: Saw pod success May 11 19:43:38.211: INFO: Pod "pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981" satisfied condition "Succeeded or Failed" May 11 19:43:38.276: INFO: Trying to get logs from node latest-worker pod pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981 container test-container: STEP: delete the pod May 11 19:43:38.366: INFO: Waiting for pod pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981 to disappear May 11 19:43:38.404: INFO: Pod pod-b0dcd129-4fdc-423a-8b5f-9e3b1f87e981 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:43:38.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8176" for this suite. • [SLOW TEST:7.618 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":140,"skipped":2187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:43:38.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7900 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7900 STEP: creating replication controller externalsvc in namespace services-7900 I0511 19:43:38.889375 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7900, replica count: 2 I0511 19:43:41.939656 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:43:44.939857 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 11 19:43:45.019: INFO: Creating new exec pod May 11 19:43:49.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7900 execpodgphcw -- /bin/sh -x -c nslookup nodeport-service' May 11 19:43:49.291: INFO: stderr: "I0511 19:43:49.208123 2364 log.go:172] (0xc000c1aa50) (0xc0005685a0) Create stream\nI0511 19:43:49.208169 2364 
log.go:172] (0xc000c1aa50) (0xc0005685a0) Stream added, broadcasting: 1\nI0511 19:43:49.210464 2364 log.go:172] (0xc000c1aa50) Reply frame received for 1\nI0511 19:43:49.210503 2364 log.go:172] (0xc000c1aa50) (0xc000aecaa0) Create stream\nI0511 19:43:49.210519 2364 log.go:172] (0xc000c1aa50) (0xc000aecaa0) Stream added, broadcasting: 3\nI0511 19:43:49.211334 2364 log.go:172] (0xc000c1aa50) Reply frame received for 3\nI0511 19:43:49.211371 2364 log.go:172] (0xc000c1aa50) (0xc0004f9220) Create stream\nI0511 19:43:49.211393 2364 log.go:172] (0xc000c1aa50) (0xc0004f9220) Stream added, broadcasting: 5\nI0511 19:43:49.212841 2364 log.go:172] (0xc000c1aa50) Reply frame received for 5\nI0511 19:43:49.277864 2364 log.go:172] (0xc000c1aa50) Data frame received for 5\nI0511 19:43:49.277889 2364 log.go:172] (0xc0004f9220) (5) Data frame handling\nI0511 19:43:49.277907 2364 log.go:172] (0xc0004f9220) (5) Data frame sent\n+ nslookup nodeport-service\nI0511 19:43:49.284398 2364 log.go:172] (0xc000c1aa50) Data frame received for 3\nI0511 19:43:49.284408 2364 log.go:172] (0xc000aecaa0) (3) Data frame handling\nI0511 19:43:49.284414 2364 log.go:172] (0xc000aecaa0) (3) Data frame sent\nI0511 19:43:49.285076 2364 log.go:172] (0xc000c1aa50) Data frame received for 3\nI0511 19:43:49.285090 2364 log.go:172] (0xc000aecaa0) (3) Data frame handling\nI0511 19:43:49.285186 2364 log.go:172] (0xc000aecaa0) (3) Data frame sent\nI0511 19:43:49.285643 2364 log.go:172] (0xc000c1aa50) Data frame received for 5\nI0511 19:43:49.285690 2364 log.go:172] (0xc0004f9220) (5) Data frame handling\nI0511 19:43:49.285718 2364 log.go:172] (0xc000c1aa50) Data frame received for 3\nI0511 19:43:49.285736 2364 log.go:172] (0xc000aecaa0) (3) Data frame handling\nI0511 19:43:49.286747 2364 log.go:172] (0xc000c1aa50) Data frame received for 1\nI0511 19:43:49.286776 2364 log.go:172] (0xc0005685a0) (1) Data frame handling\nI0511 19:43:49.286800 2364 log.go:172] (0xc0005685a0) (1) Data frame sent\nI0511 19:43:49.286824 2364 log.go:172] (0xc000c1aa50) (0xc0005685a0) Stream removed, broadcasting: 1\nI0511 19:43:49.286851 2364 log.go:172] (0xc000c1aa50) Go away received\nI0511 19:43:49.287287 2364 log.go:172] (0xc000c1aa50) (0xc0005685a0) Stream removed, broadcasting: 1\nI0511 19:43:49.287308 2364 log.go:172] (0xc000c1aa50) (0xc000aecaa0) Stream removed, broadcasting: 3\nI0511 19:43:49.287321 2364 log.go:172] (0xc000c1aa50) (0xc0004f9220) Stream removed, broadcasting: 5\n" May 11 19:43:49.291: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7900.svc.cluster.local\tcanonical name = externalsvc.services-7900.svc.cluster.local.\nName:\texternalsvc.services-7900.svc.cluster.local\nAddress: 10.101.156.160\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7900, will wait for the garbage collector to delete the pods May 11 19:43:49.348: INFO: Deleting ReplicationController externalsvc took: 4.185788ms May 11 19:43:49.548: INFO: Terminating ReplicationController externalsvc pods took: 200.162946ms May 11 19:44:05.811: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:44:06.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7900" for this suite. 
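What the nslookup above verifies is that, once the Service type flips to ExternalName, the cluster DNS answers the service name with a CNAME to the external target instead of a ClusterIP A record. A rough equivalent using only Go's standard resolver — the name is the one from this run and resolves only through the cluster DNS, i.e. from a pod inside the cluster:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Service name from this run (namespace services-7900).
	name := "nodeport-service.services-7900.svc.cluster.local"

	// After the type change, the canonical name should be the
	// externalName target rather than the service itself.
	cname, err := net.LookupCNAME(name)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Printf("%s -> %s\n", name, cname)
	// Expected here: externalsvc.services-7900.svc.cluster.local.
}
```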
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:28.637 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":141,"skipped":2210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:44:07.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:44:11.697: INFO: Waiting up to 5m0s for pod "client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b" in namespace "pods-6621" to be "Succeeded or Failed" May 11 19:44:11.794: INFO: Pod "client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b": Phase="Pending", Reason="", readiness=false. Elapsed: 96.583305ms May 11 19:44:14.004: INFO: Pod "client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306713619s May 11 19:44:16.008: INFO: Pod "client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310483449s May 11 19:44:18.087: INFO: Pod "client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.389699656s STEP: Saw pod success May 11 19:44:18.087: INFO: Pod "client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b" satisfied condition "Succeeded or Failed" May 11 19:44:18.159: INFO: Trying to get logs from node latest-worker2 pod client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b container env3cont: STEP: delete the pod May 11 19:44:18.411: INFO: Waiting for pod client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b to disappear May 11 19:44:18.523: INFO: Pod client-envvars-9151f154-fbe6-40e5-9fad-e62e5c61362b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:44:18.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6621" for this suite. 
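This test relies on the kubelet injecting `<NAME>_SERVICE_HOST` and `<NAME>_SERVICE_PORT` variables for every service that existed when the pod started. A container-side sketch (stdlib only) that dumps those variables, which is essentially what the env3cont container's output is checked against:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Print the service discovery variables the kubelet injected. Only
	// services created before this pod started are visible this way.
	for _, kv := range os.Environ() {
		name := strings.SplitN(kv, "=", 2)[0]
		if strings.Contains(name, "_SERVICE_HOST") || strings.Contains(name, "_SERVICE_PORT") {
			fmt.Println(kv)
		}
	}
}
```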
• [SLOW TEST:11.454 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2274,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:44:18.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-34256688-2a89-4ebd-b9f7-8b74e6d3b52d STEP: Creating a pod to test consume configMaps May 11 19:44:18.851: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc79fe37-5d1c-4bdb-a720-be337c0a7bc3" in namespace "projected-1851" to be "Succeeded or Failed" May 11 19:44:18.902: INFO: Pod "pod-projected-configmaps-cc79fe37-5d1c-4bdb-a720-be337c0a7bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 51.126455ms May 11 19:44:20.956: INFO: Pod "pod-projected-configmaps-cc79fe37-5d1c-4bdb-a720-be337c0a7bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104843993s May 11 19:44:22.959: INFO: Pod "pod-projected-configmaps-cc79fe37-5d1c-4bdb-a720-be337c0a7bc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108277361s STEP: Saw pod success May 11 19:44:22.959: INFO: Pod "pod-projected-configmaps-cc79fe37-5d1c-4bdb-a720-be337c0a7bc3" satisfied condition "Succeeded or Failed" May 11 19:44:22.961: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-cc79fe37-5d1c-4bdb-a720-be337c0a7bc3 container projected-configmap-volume-test: STEP: delete the pod May 11 19:44:23.143: INFO: Waiting for pod pod-projected-configmaps-cc79fe37-5d1c-4bdb-a720-be337c0a7bc3 to disappear May 11 19:44:23.151: INFO: Pod pod-projected-configmaps-cc79fe37-5d1c-4bdb-a720-be337c0a7bc3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:44:23.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1851" for this suite. 
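The projected-configMap pod above only has to read the projected key back as a file from the volume mount while running as a non-root user. A sketch of the in-container step; the mount path and key file below are hypothetical, since the real ones come from the e2e pod spec:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical mount path; the actual path and key name are set by the
	// pod spec that projects the configMap into the volume.
	const path = "/etc/projected-configmap-volume/data-1"

	b, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("content: %q\n", b)

	// The non-root aspect is just that the container runs with a non-zero
	// UID and must still be able to read the projected file.
	if info, err := os.Stat(path); err == nil {
		fmt.Printf("mode: %v, uid: %d\n", info.Mode(), os.Getuid())
	}
}
```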
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2277,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:44:23.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:44:29.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7117" for this suite. STEP: Destroying namespace "nsdeletetest-6610" for this suite. May 11 19:44:29.987: INFO: Namespace nsdeletetest-6610 was already deleted STEP: Destroying namespace "nsdeletetest-9998" for this suite. 
• [SLOW TEST:6.961 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":144,"skipped":2280,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:44:30.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a May 11 19:44:30.274: INFO: Pod name my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a: Found 0 pods out of 1 May 11 19:44:35.309: INFO: Pod name my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a: Found 1 pods out of 1 May 11 19:44:35.309: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a" are running May 11 19:44:35.351: INFO: Pod "my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a-mfdkb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 19:44:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 19:44:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 19:44:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 19:44:30 +0000 UTC Reason: Message:}]) May 11 19:44:35.351: INFO: Trying to dial the pod May 11 19:44:40.360: INFO: Controller my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a: Got expected result from replica 1 [my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a-mfdkb]: "my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a-mfdkb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:44:40.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2399" for this suite. 
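"Got expected result from replica" in the log above means that each replica, when dialed, answers with its own pod name — the behavior of a serve-hostname container. A minimal checker for one replica; the pod name is the one from this run, but the IP is hypothetical since pod IPs are only routable from inside the cluster:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// checkReplica fetches a replica's index page and confirms the body equals
// the pod's own name, which is what a serve-hostname container returns.
func checkReplica(url, podName string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if got := strings.TrimSpace(string(body)); got != podName {
		return fmt.Errorf("replica answered %q, want %q", got, podName)
	}
	return nil
}

func main() {
	// Hypothetical pod IP standing in for the replica started above.
	err := checkReplica("http://10.244.1.23:8080/",
		"my-hostname-basic-89d0b92b-226a-4697-8e58-de05e47e6c9a-mfdkb")
	fmt.Println("replica check:", err)
}
```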
• [SLOW TEST:10.248 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":145,"skipped":2285,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:44:40.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-1090/secret-test-db68aa7d-c94d-4175-a2b1-1d01015bf6ec STEP: Creating a pod to test consume secrets May 11 19:44:40.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309" in namespace "secrets-1090" to be "Succeeded or Failed" May 11 19:44:40.518: INFO: Pod "pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739622ms May 11 19:44:42.523: INFO: Pod "pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00843929s May 11 19:44:44.527: INFO: Pod "pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012569341s May 11 19:44:46.532: INFO: Pod "pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017625348s STEP: Saw pod success May 11 19:44:46.532: INFO: Pod "pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309" satisfied condition "Succeeded or Failed" May 11 19:44:46.538: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309 container env-test: STEP: delete the pod May 11 19:44:46.587: INFO: Waiting for pod pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309 to disappear May 11 19:44:46.591: INFO: Pod pod-configmaps-2ffbd8da-cad1-48a9-8a45-8dfa74962309 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:44:46.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1090" for this suite. 
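The env-test container here only has to observe the secret key as an environment variable, wired up through env[].valueFrom.secretKeyRef in the pod spec. A container-side sketch; SECRET_DATA is a hypothetical variable name, as the actual mapping is chosen by the e2e pod spec:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// SECRET_DATA is hypothetical; the pod spec decides which secret key
	// maps to which variable via secretKeyRef.
	if v, ok := os.LookupEnv("SECRET_DATA"); ok {
		// Avoid printing the value itself; presence and length suffice.
		fmt.Println("secret env var set, length:", len(v))
	} else {
		fmt.Println("secret env var not set")
	}
}
```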
• [SLOW TEST:6.232 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:44:46.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-253.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-253.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-253.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-253.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-253.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-253.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 19:44:56.864: INFO: DNS probes using dns-253/dns-test-fa6d4913-8328-46b9-ab45-4aecc574fa35 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:44:57.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-253" for this suite. 
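The wheezy/jessie probe scripts above check two record shapes: the headless-service hostname, and the pod A record, which is the pod IP with its dots turned into dashes under <namespace>.pod.cluster.local. A Go version of the same lookups — these names resolve only via the cluster DNS (from inside a pod), and the pod IP below is hypothetical:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// Headless-service hostname from this run (namespace dns-253).
	names := []string{"dns-querier-2.dns-test-service-2.dns-253.svc.cluster.local"}

	// A pod A record is the pod IP with dots replaced by dashes, under
	// <namespace>.pod.cluster.local. The IP here is hypothetical.
	podIP := "10.244.2.15"
	names = append(names, strings.ReplaceAll(podIP, ".", "-")+".dns-253.pod.cluster.local")

	for _, n := range names {
		addrs, err := net.LookupHost(n)
		fmt.Printf("%s -> %v (err=%v)\n", n, addrs, err)
	}
}
```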
• [SLOW TEST:10.439 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":147,"skipped":2333,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:44:57.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 11 19:44:57.610: INFO: Waiting up to 5m0s for pod "pod-65b40980-c38f-4375-ba01-879bb8bfd3bb" in namespace "emptydir-6290" to be "Succeeded or Failed" May 11 19:44:57.664: INFO: Pod "pod-65b40980-c38f-4375-ba01-879bb8bfd3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 53.601724ms May 11 19:44:59.722: INFO: Pod "pod-65b40980-c38f-4375-ba01-879bb8bfd3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111995834s May 11 19:45:01.726: INFO: Pod "pod-65b40980-c38f-4375-ba01-879bb8bfd3bb": Phase="Running", Reason="", readiness=true. Elapsed: 4.116058869s May 11 19:45:03.742: INFO: Pod "pod-65b40980-c38f-4375-ba01-879bb8bfd3bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132008813s STEP: Saw pod success May 11 19:45:03.742: INFO: Pod "pod-65b40980-c38f-4375-ba01-879bb8bfd3bb" satisfied condition "Succeeded or Failed" May 11 19:45:03.744: INFO: Trying to get logs from node latest-worker pod pod-65b40980-c38f-4375-ba01-879bb8bfd3bb container test-container: STEP: delete the pod May 11 19:45:03.772: INFO: Waiting for pod pod-65b40980-c38f-4375-ba01-879bb8bfd3bb to disappear May 11 19:45:03.802: INFO: Pod pod-65b40980-c38f-4375-ba01-879bb8bfd3bb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:45:03.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6290" for this suite. 
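The mount-tester image behind the emptyDir tests creates a file in the volume with the requested mode and reads the permissions back; "default" medium means node-local disk, as opposed to the tmpfs variant run earlier. A sketch of the in-container step, with a hypothetical mount point (the real one is fixed by the e2e pod spec):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical mount point for the emptyDir volume in the test pod.
	path := filepath.Join("/test-volume", "test-file")

	// Create the file asking for 0777; the effective mode is still subject
	// to the process umask, which the real test image accounts for.
	if err := os.WriteFile(path, []byte("emptydir content\n"), 0o777); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	info, err := os.Stat(path)
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	fmt.Printf("perms: %v (running as uid %d)\n", info.Mode().Perm(), os.Getuid())
}
```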
• [SLOW TEST:6.812 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:45:03.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-24gp STEP: Creating a pod to test atomic-volume-subpath May 11 19:45:03.926: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-24gp" in namespace "subpath-2826" to be "Succeeded or Failed" May 11 19:45:03.980: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Pending", Reason="", readiness=false. Elapsed: 53.847985ms May 11 19:45:06.185: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258079172s May 11 19:45:08.188: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 4.261632892s May 11 19:45:10.192: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 6.265443125s May 11 19:45:12.196: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 8.269990009s May 11 19:45:14.201: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 10.274402479s May 11 19:45:16.205: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 12.278035499s May 11 19:45:18.209: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 14.282375701s May 11 19:45:20.212: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 16.28516931s May 11 19:45:22.216: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 18.289674018s May 11 19:45:24.221: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 20.294786447s May 11 19:45:26.225: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 22.298053452s May 11 19:45:28.304: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Running", Reason="", readiness=true. Elapsed: 24.377437664s May 11 19:45:30.308: INFO: Pod "pod-subpath-test-secret-24gp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.381761747s STEP: Saw pod success May 11 19:45:30.308: INFO: Pod "pod-subpath-test-secret-24gp" satisfied condition "Succeeded or Failed" May 11 19:45:30.312: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-24gp container test-container-subpath-secret-24gp: STEP: delete the pod May 11 19:45:30.413: INFO: Waiting for pod pod-subpath-test-secret-24gp to disappear May 11 19:45:30.424: INFO: Pod pod-subpath-test-secret-24gp no longer exists STEP: Deleting pod pod-subpath-test-secret-24gp May 11 19:45:30.424: INFO: Deleting pod "pod-subpath-test-secret-24gp" in namespace "subpath-2826" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:45:30.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2826" for this suite. • [SLOW TEST:26.581 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":149,"skipped":2363,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:45:30.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9402 May 11 19:45:36.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9402 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 11 19:45:37.126: INFO: stderr: "I0511 19:45:37.036397 2384 log.go:172] (0xc000b071e0) (0xc000ac6320) Create stream\nI0511 19:45:37.036452 2384 log.go:172] (0xc000b071e0) (0xc000ac6320) Stream added, broadcasting: 1\nI0511 19:45:37.040299 2384 log.go:172] (0xc000b071e0) Reply frame received for 1\nI0511 19:45:37.040346 2384 log.go:172] (0xc000b071e0) (0xc00050cf00) Create stream\nI0511 19:45:37.040358 2384 log.go:172] (0xc000b071e0) (0xc00050cf00) Stream added, broadcasting: 3\nI0511 19:45:37.041560 2384 log.go:172] (0xc000b071e0) Reply frame received for 3\nI0511 19:45:37.041585 2384 log.go:172] (0xc000b071e0) (0xc0003681e0) Create stream\nI0511 19:45:37.041592 2384 log.go:172] (0xc000b071e0) (0xc0003681e0) Stream added, broadcasting: 
5\nI0511 19:45:37.042336 2384 log.go:172] (0xc000b071e0) Reply frame received for 5\nI0511 19:45:37.109635 2384 log.go:172] (0xc000b071e0) Data frame received for 5\nI0511 19:45:37.109672 2384 log.go:172] (0xc0003681e0) (5) Data frame handling\nI0511 19:45:37.109701 2384 log.go:172] (0xc0003681e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0511 19:45:37.115442 2384 log.go:172] (0xc000b071e0) Data frame received for 3\nI0511 19:45:37.115483 2384 log.go:172] (0xc00050cf00) (3) Data frame handling\nI0511 19:45:37.115522 2384 log.go:172] (0xc00050cf00) (3) Data frame sent\nI0511 19:45:37.116089 2384 log.go:172] (0xc000b071e0) Data frame received for 5\nI0511 19:45:37.116153 2384 log.go:172] (0xc0003681e0) (5) Data frame handling\nI0511 19:45:37.116196 2384 log.go:172] (0xc000b071e0) Data frame received for 3\nI0511 19:45:37.116219 2384 log.go:172] (0xc00050cf00) (3) Data frame handling\nI0511 19:45:37.121357 2384 log.go:172] (0xc000b071e0) Data frame received for 1\nI0511 19:45:37.121444 2384 log.go:172] (0xc000ac6320) (1) Data frame handling\nI0511 19:45:37.121465 2384 log.go:172] (0xc000ac6320) (1) Data frame sent\nI0511 19:45:37.121523 2384 log.go:172] (0xc000b071e0) (0xc000ac6320) Stream removed, broadcasting: 1\nI0511 19:45:37.121577 2384 log.go:172] (0xc000b071e0) Go away received\nI0511 19:45:37.121899 2384 log.go:172] (0xc000b071e0) (0xc000ac6320) Stream removed, broadcasting: 1\nI0511 19:45:37.121916 2384 log.go:172] (0xc000b071e0) (0xc00050cf00) Stream removed, broadcasting: 3\nI0511 19:45:37.121925 2384 log.go:172] (0xc000b071e0) (0xc0003681e0) Stream removed, broadcasting: 5\n" May 11 19:45:37.127: INFO: stdout: "iptables" May 11 19:45:37.127: INFO: proxyMode: iptables May 11 19:45:37.131: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 19:45:37.168: INFO: Pod kube-proxy-mode-detector still exists May 11 19:45:39.169: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 19:45:39.172: INFO: Pod kube-proxy-mode-detector still exists May 11 19:45:41.169: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 11 19:45:41.178: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-9402 STEP: creating replication controller affinity-nodeport-timeout in namespace services-9402 I0511 19:45:41.253670 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-9402, replica count: 3 I0511 19:45:44.304071 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 19:45:47.304295 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 19:45:47.325: INFO: Creating new exec pod May 11 19:45:52.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9402 execpod-affinitywbvlx -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 11 19:45:52.645: INFO: stderr: "I0511 19:45:52.548432 2404 log.go:172] (0xc00085e840) (0xc000a18000) Create stream\nI0511 19:45:52.548477 2404 log.go:172] (0xc00085e840) (0xc000a18000) Stream added, broadcasting: 1\nI0511 19:45:52.551007 2404 log.go:172] (0xc00085e840) Reply frame received for 1\nI0511 19:45:52.551082 2404 log.go:172] (0xc00085e840) (0xc000ac4000) 
Create stream\nI0511 19:45:52.551113 2404 log.go:172] (0xc00085e840) (0xc000ac4000) Stream added, broadcasting: 3\nI0511 19:45:52.552229 2404 log.go:172] (0xc00085e840) Reply frame received for 3\nI0511 19:45:52.552300 2404 log.go:172] (0xc00085e840) (0xc00066c5a0) Create stream\nI0511 19:45:52.552319 2404 log.go:172] (0xc00085e840) (0xc00066c5a0) Stream added, broadcasting: 5\nI0511 19:45:52.553338 2404 log.go:172] (0xc00085e840) Reply frame received for 5\nI0511 19:45:52.639433 2404 log.go:172] (0xc00085e840) Data frame received for 3\nI0511 19:45:52.639484 2404 log.go:172] (0xc000ac4000) (3) Data frame handling\nI0511 19:45:52.639517 2404 log.go:172] (0xc00085e840) Data frame received for 5\nI0511 19:45:52.639543 2404 log.go:172] (0xc00066c5a0) (5) Data frame handling\nI0511 19:45:52.639614 2404 log.go:172] (0xc00066c5a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0511 19:45:52.639646 2404 log.go:172] (0xc00085e840) Data frame received for 5\nI0511 19:45:52.639663 2404 log.go:172] (0xc00066c5a0) (5) Data frame handling\nI0511 19:45:52.640846 2404 log.go:172] (0xc00085e840) Data frame received for 1\nI0511 19:45:52.640861 2404 log.go:172] (0xc000a18000) (1) Data frame handling\nI0511 19:45:52.640874 2404 log.go:172] (0xc000a18000) (1) Data frame sent\nI0511 19:45:52.640881 2404 log.go:172] (0xc00085e840) (0xc000a18000) Stream removed, broadcasting: 1\nI0511 19:45:52.641073 2404 log.go:172] (0xc00085e840) Go away received\nI0511 19:45:52.641102 2404 log.go:172] (0xc00085e840) (0xc000a18000) Stream removed, broadcasting: 1\nI0511 19:45:52.641209 2404 log.go:172] (0xc00085e840) (0xc000ac4000) Stream removed, broadcasting: 3\nI0511 19:45:52.641219 2404 log.go:172] (0xc00085e840) (0xc00066c5a0) Stream removed, broadcasting: 5\n" May 11 19:45:52.645: INFO: stdout: "" May 11 19:45:52.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9402 execpod-affinitywbvlx -- /bin/sh -x -c nc -zv -t -w 2 10.107.130.19 80' May 11 19:45:52.849: INFO: stderr: "I0511 19:45:52.772843 2426 log.go:172] (0xc0006b0e70) (0xc00092c280) Create stream\nI0511 19:45:52.772885 2426 log.go:172] (0xc0006b0e70) (0xc00092c280) Stream added, broadcasting: 1\nI0511 19:45:52.778646 2426 log.go:172] (0xc0006b0e70) Reply frame received for 1\nI0511 19:45:52.778705 2426 log.go:172] (0xc0006b0e70) (0xc0006f85a0) Create stream\nI0511 19:45:52.778724 2426 log.go:172] (0xc0006b0e70) (0xc0006f85a0) Stream added, broadcasting: 3\nI0511 19:45:52.782135 2426 log.go:172] (0xc0006b0e70) Reply frame received for 3\nI0511 19:45:52.782153 2426 log.go:172] (0xc0006b0e70) (0xc0006eaa00) Create stream\nI0511 19:45:52.782161 2426 log.go:172] (0xc0006b0e70) (0xc0006eaa00) Stream added, broadcasting: 5\nI0511 19:45:52.783846 2426 log.go:172] (0xc0006b0e70) Reply frame received for 5\nI0511 19:45:52.844131 2426 log.go:172] (0xc0006b0e70) Data frame received for 3\nI0511 19:45:52.844160 2426 log.go:172] (0xc0006f85a0) (3) Data frame handling\nI0511 19:45:52.844189 2426 log.go:172] (0xc0006b0e70) Data frame received for 5\nI0511 19:45:52.844209 2426 log.go:172] (0xc0006eaa00) (5) Data frame handling\nI0511 19:45:52.844221 2426 log.go:172] (0xc0006eaa00) (5) Data frame sent\nI0511 19:45:52.844234 2426 log.go:172] (0xc0006b0e70) Data frame received for 5\nI0511 19:45:52.844240 2426 log.go:172] (0xc0006eaa00) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.130.19 
80\nConnection to 10.107.130.19 80 port [tcp/http] succeeded!\nI0511 19:45:52.845492 2426 log.go:172] (0xc0006b0e70) Data frame received for 1\nI0511 19:45:52.845505 2426 log.go:172] (0xc00092c280) (1) Data frame handling\nI0511 19:45:52.845513 2426 log.go:172] (0xc00092c280) (1) Data frame sent\nI0511 19:45:52.845524 2426 log.go:172] (0xc0006b0e70) (0xc00092c280) Stream removed, broadcasting: 1\nI0511 19:45:52.845587 2426 log.go:172] (0xc0006b0e70) Go away received\nI0511 19:45:52.845817 2426 log.go:172] (0xc0006b0e70) (0xc00092c280) Stream removed, broadcasting: 1\nI0511 19:45:52.845831 2426 log.go:172] (0xc0006b0e70) (0xc0006f85a0) Stream removed, broadcasting: 3\nI0511 19:45:52.845838 2426 log.go:172] (0xc0006b0e70) (0xc0006eaa00) Stream removed, broadcasting: 5\n" May 11 19:45:52.849: INFO: stdout: "" May 11 19:45:52.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9402 execpod-affinitywbvlx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32250' May 11 19:45:53.053: INFO: stderr: "I0511 19:45:52.980780 2444 log.go:172] (0xc000a69290) (0xc000b045a0) Create stream\nI0511 19:45:52.980814 2444 log.go:172] (0xc000a69290) (0xc000b045a0) Stream added, broadcasting: 1\nI0511 19:45:52.984998 2444 log.go:172] (0xc000a69290) Reply frame received for 1\nI0511 19:45:52.985037 2444 log.go:172] (0xc000a69290) (0xc00084cdc0) Create stream\nI0511 19:45:52.985055 2444 log.go:172] (0xc000a69290) (0xc00084cdc0) Stream added, broadcasting: 3\nI0511 19:45:52.985960 2444 log.go:172] (0xc000a69290) Reply frame received for 3\nI0511 19:45:52.985994 2444 log.go:172] (0xc000a69290) (0xc0005c6140) Create stream\nI0511 19:45:52.986010 2444 log.go:172] (0xc000a69290) (0xc0005c6140) Stream added, broadcasting: 5\nI0511 19:45:52.986855 2444 log.go:172] (0xc000a69290) Reply frame received for 5\nI0511 19:45:53.048017 2444 log.go:172] (0xc000a69290) Data frame received for 5\nI0511 19:45:53.048044 2444 log.go:172] (0xc0005c6140) (5) Data frame handling\nI0511 19:45:53.048056 2444 log.go:172] (0xc0005c6140) (5) Data frame sent\nI0511 19:45:53.048066 2444 log.go:172] (0xc000a69290) Data frame received for 5\nI0511 19:45:53.048075 2444 log.go:172] (0xc0005c6140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32250\nConnection to 172.17.0.13 32250 port [tcp/32250] succeeded!\nI0511 19:45:53.048113 2444 log.go:172] (0xc000a69290) Data frame received for 3\nI0511 19:45:53.048146 2444 log.go:172] (0xc00084cdc0) (3) Data frame handling\nI0511 19:45:53.049565 2444 log.go:172] (0xc000a69290) Data frame received for 1\nI0511 19:45:53.049590 2444 log.go:172] (0xc000b045a0) (1) Data frame handling\nI0511 19:45:53.049608 2444 log.go:172] (0xc000b045a0) (1) Data frame sent\nI0511 19:45:53.049619 2444 log.go:172] (0xc000a69290) (0xc000b045a0) Stream removed, broadcasting: 1\nI0511 19:45:53.049666 2444 log.go:172] (0xc000a69290) Go away received\nI0511 19:45:53.049929 2444 log.go:172] (0xc000a69290) (0xc000b045a0) Stream removed, broadcasting: 1\nI0511 19:45:53.049952 2444 log.go:172] (0xc000a69290) (0xc00084cdc0) Stream removed, broadcasting: 3\nI0511 19:45:53.049966 2444 log.go:172] (0xc000a69290) (0xc0005c6140) Stream removed, broadcasting: 5\n" May 11 19:45:53.054: INFO: stdout: "" May 11 19:45:53.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9402 execpod-affinitywbvlx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32250' May 11 19:45:53.266: INFO: 
stderr: "I0511 19:45:53.193643 2466 log.go:172] (0xc0006d28f0) (0xc000386140) Create stream\nI0511 19:45:53.193703 2466 log.go:172] (0xc0006d28f0) (0xc000386140) Stream added, broadcasting: 1\nI0511 19:45:53.195817 2466 log.go:172] (0xc0006d28f0) Reply frame received for 1\nI0511 19:45:53.195856 2466 log.go:172] (0xc0006d28f0) (0xc00002ac80) Create stream\nI0511 19:45:53.195876 2466 log.go:172] (0xc0006d28f0) (0xc00002ac80) Stream added, broadcasting: 3\nI0511 19:45:53.196689 2466 log.go:172] (0xc0006d28f0) Reply frame received for 3\nI0511 19:45:53.196727 2466 log.go:172] (0xc0006d28f0) (0xc00002af00) Create stream\nI0511 19:45:53.196740 2466 log.go:172] (0xc0006d28f0) (0xc00002af00) Stream added, broadcasting: 5\nI0511 19:45:53.197621 2466 log.go:172] (0xc0006d28f0) Reply frame received for 5\nI0511 19:45:53.259555 2466 log.go:172] (0xc0006d28f0) Data frame received for 5\nI0511 19:45:53.259583 2466 log.go:172] (0xc00002af00) (5) Data frame handling\nI0511 19:45:53.259601 2466 log.go:172] (0xc00002af00) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32250\nI0511 19:45:53.259822 2466 log.go:172] (0xc0006d28f0) Data frame received for 5\nI0511 19:45:53.259852 2466 log.go:172] (0xc00002af00) (5) Data frame handling\nI0511 19:45:53.259883 2466 log.go:172] (0xc00002af00) (5) Data frame sent\nConnection to 172.17.0.12 32250 port [tcp/32250] succeeded!\nI0511 19:45:53.260174 2466 log.go:172] (0xc0006d28f0) Data frame received for 3\nI0511 19:45:53.260205 2466 log.go:172] (0xc00002ac80) (3) Data frame handling\nI0511 19:45:53.260236 2466 log.go:172] (0xc0006d28f0) Data frame received for 5\nI0511 19:45:53.260250 2466 log.go:172] (0xc00002af00) (5) Data frame handling\nI0511 19:45:53.261725 2466 log.go:172] (0xc0006d28f0) Data frame received for 1\nI0511 19:45:53.261754 2466 log.go:172] (0xc000386140) (1) Data frame handling\nI0511 19:45:53.261793 2466 log.go:172] (0xc000386140) (1) Data frame sent\nI0511 19:45:53.261816 2466 log.go:172] (0xc0006d28f0) (0xc000386140) Stream removed, broadcasting: 1\nI0511 19:45:53.261837 2466 log.go:172] (0xc0006d28f0) Go away received\nI0511 19:45:53.262247 2466 log.go:172] (0xc0006d28f0) (0xc000386140) Stream removed, broadcasting: 1\nI0511 19:45:53.262269 2466 log.go:172] (0xc0006d28f0) (0xc00002ac80) Stream removed, broadcasting: 3\nI0511 19:45:53.262279 2466 log.go:172] (0xc0006d28f0) (0xc00002af00) Stream removed, broadcasting: 5\n" May 11 19:45:53.266: INFO: stdout: "" May 11 19:45:53.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9402 execpod-affinitywbvlx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32250/ ; done' May 11 19:45:53.540: INFO: stderr: "I0511 19:45:53.403325 2487 log.go:172] (0xc000acbb80) (0xc00083b040) Create stream\nI0511 19:45:53.403365 2487 log.go:172] (0xc000acbb80) (0xc00083b040) Stream added, broadcasting: 1\nI0511 19:45:53.405451 2487 log.go:172] (0xc000acbb80) Reply frame received for 1\nI0511 19:45:53.405483 2487 log.go:172] (0xc000acbb80) (0xc0006cf0e0) Create stream\nI0511 19:45:53.405492 2487 log.go:172] (0xc000acbb80) (0xc0006cf0e0) Stream added, broadcasting: 3\nI0511 19:45:53.406335 2487 log.go:172] (0xc000acbb80) Reply frame received for 3\nI0511 19:45:53.406398 2487 log.go:172] (0xc000acbb80) (0xc00083b5e0) Create stream\nI0511 19:45:53.406431 2487 log.go:172] (0xc000acbb80) (0xc00083b5e0) Stream added, broadcasting: 5\nI0511 19:45:53.407181 2487 log.go:172] (0xc000acbb80) Reply 
frame received for 5\nI0511 19:45:53.458114 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.458161 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.458177 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.458196 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.458208 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.458226 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.462426 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.462454 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.462490 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.462764 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.462775 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.462782 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.462847 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.462861 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.462875 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.470389 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.470408 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.470431 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.470913 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.470934 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.470945 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.470963 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.470973 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.470986 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.474875 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.474893 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.474918 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.475528 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.475537 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.475543 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.475548 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.475552 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.475556 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.479893 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.479904 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.479914 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.480417 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.480435 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.480450 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.480461 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.480465 2487 log.go:172] 
(0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.480470 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.484087 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.484102 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.484114 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.484454 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.484471 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.484477 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.484483 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.484487 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.484493 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.488673 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.488684 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.488693 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.489073 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.489100 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.489202 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.489218 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.489225 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.489234 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.494684 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.494702 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.494711 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.495228 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.495252 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.495258 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.495266 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.495270 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.495275 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.499308 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.499322 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.499333 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.499894 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.499902 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.499906 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.500009 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.500023 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.500037 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.503297 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.503311 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.503327 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.503609 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.503657 2487 
log.go:172] (0xc00083b5e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.503669 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.503692 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.503713 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\nI0511 19:45:53.503730 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.509314 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.509323 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.509338 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.509869 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.509877 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.509884 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.509898 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.509905 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.509920 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.512725 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.512734 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.512741 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.513286 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.513296 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.513301 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.513325 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.513356 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.513375 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.517083 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.517092 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.517103 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.517614 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.517629 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.517646 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\nI0511 19:45:53.517658 2487 log.go:172] (0xc000acbb80) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI0511 19:45:53.517671 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.517690 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n 2 http://172.17.0.13:32250/\nI0511 19:45:53.517806 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.517816 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.517824 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.520951 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.520972 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.520986 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.521455 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.521470 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.521484 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\nI0511 19:45:53.521493 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 
19:45:53.521504 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.521521 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\nI0511 19:45:53.521624 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.521641 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.521661 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.525460 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.525491 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.525511 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.526049 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.526064 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.526073 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.526196 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.526214 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.526237 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.530018 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.530027 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.530032 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.530490 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.530509 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.530530 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\nI0511 19:45:53.530549 2487 log.go:172] (0xc000acbb80) Data frame received for 5\n+ echo\nI0511 19:45:53.530560 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.530572 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.530585 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.530596 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.530608 2487 log.go:172] (0xc00083b5e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.534217 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.534235 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.534254 2487 log.go:172] (0xc0006cf0e0) (3) Data frame sent\nI0511 19:45:53.535101 2487 log.go:172] (0xc000acbb80) Data frame received for 5\nI0511 19:45:53.535116 2487 log.go:172] (0xc00083b5e0) (5) Data frame handling\nI0511 19:45:53.535130 2487 log.go:172] (0xc000acbb80) Data frame received for 3\nI0511 19:45:53.535147 2487 log.go:172] (0xc0006cf0e0) (3) Data frame handling\nI0511 19:45:53.536294 2487 log.go:172] (0xc000acbb80) Data frame received for 1\nI0511 19:45:53.536309 2487 log.go:172] (0xc00083b040) (1) Data frame handling\nI0511 19:45:53.536323 2487 log.go:172] (0xc00083b040) (1) Data frame sent\nI0511 19:45:53.536333 2487 log.go:172] (0xc000acbb80) (0xc00083b040) Stream removed, broadcasting: 1\nI0511 19:45:53.536354 2487 log.go:172] (0xc000acbb80) Go away received\nI0511 19:45:53.536696 2487 log.go:172] (0xc000acbb80) (0xc00083b040) Stream removed, broadcasting: 1\nI0511 19:45:53.536719 2487 log.go:172] (0xc000acbb80) (0xc0006cf0e0) Stream removed, broadcasting: 3\nI0511 19:45:53.536734 2487 log.go:172] (0xc000acbb80) (0xc00083b5e0) Stream removed, broadcasting: 5\n" May 11 19:45:53.540: INFO: stdout: 
"\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f\naffinity-nodeport-timeout-8l69f" May 11 19:45:53.540: INFO: Received response from host: May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Received response from host: affinity-nodeport-timeout-8l69f May 11 19:45:53.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9402 execpod-affinitywbvlx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32250/' May 11 19:45:53.733: INFO: stderr: "I0511 19:45:53.662203 2506 log.go:172] (0xc0009b54a0) (0xc000a4a640) Create stream\nI0511 19:45:53.662246 2506 log.go:172] (0xc0009b54a0) (0xc000a4a640) Stream added, broadcasting: 1\nI0511 19:45:53.666579 2506 log.go:172] (0xc0009b54a0) Reply frame received for 1\nI0511 19:45:53.666621 2506 log.go:172] (0xc0009b54a0) (0xc0007240a0) Create stream\nI0511 19:45:53.666637 2506 log.go:172] (0xc0009b54a0) (0xc0007240a0) Stream added, broadcasting: 3\nI0511 19:45:53.667254 2506 log.go:172] (0xc0009b54a0) Reply frame received for 3\nI0511 19:45:53.667292 2506 log.go:172] (0xc0009b54a0) (0xc0004e6f00) Create stream\nI0511 19:45:53.667316 2506 log.go:172] (0xc0009b54a0) (0xc0004e6f00) Stream added, broadcasting: 5\nI0511 19:45:53.667934 2506 log.go:172] (0xc0009b54a0) Reply frame received for 5\nI0511 19:45:53.725308 2506 log.go:172] (0xc0009b54a0) Data frame received for 5\nI0511 19:45:53.725334 2506 log.go:172] (0xc0004e6f00) (5) Data frame handling\nI0511 19:45:53.725352 2506 log.go:172] (0xc0004e6f00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:45:53.728243 2506 log.go:172] (0xc0009b54a0) Data frame received for 3\nI0511 19:45:53.728268 2506 log.go:172] (0xc0007240a0) (3) Data frame handling\nI0511 
19:45:53.728301 2506 log.go:172] (0xc0007240a0) (3) Data frame sent\nI0511 19:45:53.728846 2506 log.go:172] (0xc0009b54a0) Data frame received for 5\nI0511 19:45:53.728867 2506 log.go:172] (0xc0004e6f00) (5) Data frame handling\nI0511 19:45:53.729050 2506 log.go:172] (0xc0009b54a0) Data frame received for 3\nI0511 19:45:53.729071 2506 log.go:172] (0xc0007240a0) (3) Data frame handling\nI0511 19:45:53.730148 2506 log.go:172] (0xc0009b54a0) Data frame received for 1\nI0511 19:45:53.730166 2506 log.go:172] (0xc000a4a640) (1) Data frame handling\nI0511 19:45:53.730182 2506 log.go:172] (0xc000a4a640) (1) Data frame sent\nI0511 19:45:53.730208 2506 log.go:172] (0xc0009b54a0) (0xc000a4a640) Stream removed, broadcasting: 1\nI0511 19:45:53.730226 2506 log.go:172] (0xc0009b54a0) Go away received\nI0511 19:45:53.730461 2506 log.go:172] (0xc0009b54a0) (0xc000a4a640) Stream removed, broadcasting: 1\nI0511 19:45:53.730478 2506 log.go:172] (0xc0009b54a0) (0xc0007240a0) Stream removed, broadcasting: 3\nI0511 19:45:53.730486 2506 log.go:172] (0xc0009b54a0) (0xc0004e6f00) Stream removed, broadcasting: 5\n" May 11 19:45:53.733: INFO: stdout: "affinity-nodeport-timeout-8l69f" May 11 19:46:08.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9402 execpod-affinitywbvlx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32250/' May 11 19:46:08.962: INFO: stderr: "I0511 19:46:08.856542 2525 log.go:172] (0xc0005aa210) (0xc0006481e0) Create stream\nI0511 19:46:08.856598 2525 log.go:172] (0xc0005aa210) (0xc0006481e0) Stream added, broadcasting: 1\nI0511 19:46:08.858959 2525 log.go:172] (0xc0005aa210) Reply frame received for 1\nI0511 19:46:08.858990 2525 log.go:172] (0xc0005aa210) (0xc00055ed20) Create stream\nI0511 19:46:08.859001 2525 log.go:172] (0xc0005aa210) (0xc00055ed20) Stream added, broadcasting: 3\nI0511 19:46:08.859816 2525 log.go:172] (0xc0005aa210) Reply frame received for 3\nI0511 19:46:08.859850 2525 log.go:172] (0xc0005aa210) (0xc00044e0a0) Create stream\nI0511 19:46:08.859864 2525 log.go:172] (0xc0005aa210) (0xc00044e0a0) Stream added, broadcasting: 5\nI0511 19:46:08.860483 2525 log.go:172] (0xc0005aa210) Reply frame received for 5\nI0511 19:46:08.952216 2525 log.go:172] (0xc0005aa210) Data frame received for 5\nI0511 19:46:08.952244 2525 log.go:172] (0xc00044e0a0) (5) Data frame handling\nI0511 19:46:08.952264 2525 log.go:172] (0xc00044e0a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32250/\nI0511 19:46:08.954929 2525 log.go:172] (0xc0005aa210) Data frame received for 3\nI0511 19:46:08.954958 2525 log.go:172] (0xc00055ed20) (3) Data frame handling\nI0511 19:46:08.954977 2525 log.go:172] (0xc00055ed20) (3) Data frame sent\nI0511 19:46:08.955842 2525 log.go:172] (0xc0005aa210) Data frame received for 3\nI0511 19:46:08.955869 2525 log.go:172] (0xc00055ed20) (3) Data frame handling\nI0511 19:46:08.955891 2525 log.go:172] (0xc0005aa210) Data frame received for 5\nI0511 19:46:08.955908 2525 log.go:172] (0xc00044e0a0) (5) Data frame handling\nI0511 19:46:08.957453 2525 log.go:172] (0xc0005aa210) Data frame received for 1\nI0511 19:46:08.957468 2525 log.go:172] (0xc0006481e0) (1) Data frame handling\nI0511 19:46:08.957477 2525 log.go:172] (0xc0006481e0) (1) Data frame sent\nI0511 19:46:08.957485 2525 log.go:172] (0xc0005aa210) (0xc0006481e0) Stream removed, broadcasting: 1\nI0511 19:46:08.957495 2525 log.go:172] (0xc0005aa210) Go away received\nI0511 19:46:08.957891 2525 
log.go:172] (0xc0005aa210) (0xc0006481e0) Stream removed, broadcasting: 1\nI0511 19:46:08.957912 2525 log.go:172] (0xc0005aa210) (0xc00055ed20) Stream removed, broadcasting: 3\nI0511 19:46:08.957937 2525 log.go:172] (0xc0005aa210) (0xc00044e0a0) Stream removed, broadcasting: 5\n" May 11 19:46:08.962: INFO: stdout: "affinity-nodeport-timeout-jdcnf" May 11 19:46:08.962: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-9402, will wait for the garbage collector to delete the pods May 11 19:46:09.105: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.924773ms May 11 19:46:09.605: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.315354ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:46:25.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9402" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:55.556 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":150,"skipped":2383,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:46:25.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-f629b280-e10a-4483-9dc3-3bbd1c086283 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:46:26.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4492" for this suite. 
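------------------------------
The affinity behavior exercised earlier in this run — all sixteen curls landing on affinity-nodeport-timeout-8l69f, then a different backend (…-jdcnf) after a 15-second pause — comes from ClientIP session affinity with a short idle timeout. A minimal Service sketch of that configuration (the 10-second timeout and port numbers are illustrative assumptions consistent with the log, not values read from the test source):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport-timeout
spec:
  type: NodePort
  selector:
    name: affinity-nodeport-timeout
  sessionAffinity: ClientIP       # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10          # idle longer than this and the pin is dropped
  ports:
  - port: 80
    targetPort: 9376
EOF
------------------------------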
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":151,"skipped":2390,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:46:26.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:46:26.166: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a" in namespace "projected-1400" to be "Succeeded or Failed" May 11 19:46:26.196: INFO: Pod "downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.041124ms May 11 19:46:28.387: INFO: Pod "downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221201702s May 11 19:46:30.391: INFO: Pod "downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a": Phase="Running", Reason="", readiness=true. Elapsed: 4.224980607s May 11 19:46:32.395: INFO: Pod "downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228609947s STEP: Saw pod success May 11 19:46:32.395: INFO: Pod "downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a" satisfied condition "Succeeded or Failed" May 11 19:46:32.398: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a container client-container: STEP: delete the pod May 11 19:46:32.451: INFO: Waiting for pod downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a to disappear May 11 19:46:32.525: INFO: Pod downwardapi-volume-8b4ce046-897d-414b-88df-2d47b6bd327a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:46:32.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1400" for this suite. 
• [SLOW TEST:6.480 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":152,"skipped":2390,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:46:32.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 11 19:46:32.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-467' May 11 19:46:33.059: INFO: stderr: "" May 11 19:46:33.059: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 19:46:33.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-467' May 11 19:46:33.154: INFO: stderr: "" May 11 19:46:33.154: INFO: stdout: "update-demo-nautilus-9bvfc update-demo-nautilus-lllrv " May 11 19:46:33.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bvfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-467' May 11 19:46:33.327: INFO: stderr: "" May 11 19:46:33.327: INFO: stdout: "" May 11 19:46:33.327: INFO: update-demo-nautilus-9bvfc is created but not running May 11 19:46:38.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-467' May 11 19:46:38.450: INFO: stderr: "" May 11 19:46:38.450: INFO: stdout: "update-demo-nautilus-9bvfc update-demo-nautilus-lllrv " May 11 19:46:38.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bvfc -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-467' May 11 19:46:38.546: INFO: stderr: "" May 11 19:46:38.546: INFO: stdout: "true" May 11 19:46:38.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bvfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-467' May 11 19:46:38.677: INFO: stderr: "" May 11 19:46:38.677: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 19:46:38.677: INFO: validating pod update-demo-nautilus-9bvfc May 11 19:46:38.687: INFO: got data: { "image": "nautilus.jpg" } May 11 19:46:38.687: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 19:46:38.687: INFO: update-demo-nautilus-9bvfc is verified up and running May 11 19:46:38.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lllrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-467' May 11 19:46:38.803: INFO: stderr: "" May 11 19:46:38.803: INFO: stdout: "true" May 11 19:46:38.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lllrv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-467' May 11 19:46:38.901: INFO: stderr: "" May 11 19:46:38.901: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 19:46:38.901: INFO: validating pod update-demo-nautilus-lllrv May 11 19:46:38.906: INFO: got data: { "image": "nautilus.jpg" } May 11 19:46:38.906: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 19:46:38.906: INFO: update-demo-nautilus-lllrv is verified up and running STEP: using delete to clean up resources May 11 19:46:38.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-467' May 11 19:46:39.082: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 19:46:39.082: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 19:46:39.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-467' May 11 19:46:39.181: INFO: stderr: "No resources found in kubectl-467 namespace.\n" May 11 19:46:39.181: INFO: stdout: "" May 11 19:46:39.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-467 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 19:46:39.339: INFO: stderr: "" May 11 19:46:39.339: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:46:39.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-467" for this suite. • [SLOW TEST:6.839 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":153,"skipped":2396,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:46:39.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 19:46:39.514: INFO: Waiting up to 5m0s for pod "pod-eaf1c977-1959-4ac9-a605-6e0727d09b76" in namespace "emptydir-9009" to be "Succeeded or Failed" May 11 19:46:39.518: INFO: Pod "pod-eaf1c977-1959-4ac9-a605-6e0727d09b76": Phase="Pending", Reason="", readiness=false. Elapsed: 3.752116ms May 11 19:46:41.579: INFO: Pod "pod-eaf1c977-1959-4ac9-a605-6e0727d09b76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065516501s May 11 19:46:43.604: INFO: Pod "pod-eaf1c977-1959-4ac9-a605-6e0727d09b76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0903876s May 11 19:46:45.904: INFO: Pod "pod-eaf1c977-1959-4ac9-a605-6e0727d09b76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.389609951s STEP: Saw pod success May 11 19:46:45.904: INFO: Pod "pod-eaf1c977-1959-4ac9-a605-6e0727d09b76" satisfied condition "Succeeded or Failed" May 11 19:46:46.131: INFO: Trying to get logs from node latest-worker2 pod pod-eaf1c977-1959-4ac9-a605-6e0727d09b76 container test-container: STEP: delete the pod May 11 19:46:46.590: INFO: Waiting for pod pod-eaf1c977-1959-4ac9-a605-6e0727d09b76 to disappear May 11 19:46:46.609: INFO: Pod pod-eaf1c977-1959-4ac9-a605-6e0727d09b76 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:46:46.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9009" for this suite. • [SLOW TEST:7.449 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":154,"skipped":2408,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:46:46.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 11 19:46:51.578: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9138 pod-service-account-a76825d7-5d98-4da6-8ec7-bdb64d3def6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 11 19:46:51.814: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9138 pod-service-account-a76825d7-5d98-4da6-8ec7-bdb64d3def6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 11 19:46:52.037: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9138 pod-service-account-a76825d7-5d98-4da6-8ec7-bdb64d3def6f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:46:52.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9138" for this suite. 
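The three kubectl exec probes above read the credential files that the kubelet projects into every pod from its service account. A minimal in-pod sketch that reads the same three files with only the standard library (the mount point is the standard path shown in the log):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// Standard mount point for service-account credentials inside a pod,
// the same directory the exec probes above read from.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
    for _, name := range []string{"token", "ca.crt", "namespace"} {
        data, err := os.ReadFile(filepath.Join(saDir, name))
        if err != nil {
            fmt.Fprintf(os.Stderr, "read %s: %v\n", name, err)
            continue
        }
        fmt.Printf("%s: %d bytes\n", name, len(data))
    }
}
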
• [SLOW TEST:5.435 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":155,"skipped":2409,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:46:52.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:46:52.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-112b8966-154d-485e-9678-d2f85e1e8af9" in namespace "projected-3239" to be "Succeeded or Failed" May 11 19:46:52.395: INFO: Pod "downwardapi-volume-112b8966-154d-485e-9678-d2f85e1e8af9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.949622ms May 11 19:46:54.398: INFO: Pod "downwardapi-volume-112b8966-154d-485e-9678-d2f85e1e8af9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019416511s May 11 19:46:56.401: INFO: Pod "downwardapi-volume-112b8966-154d-485e-9678-d2f85e1e8af9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022680318s STEP: Saw pod success May 11 19:46:56.402: INFO: Pod "downwardapi-volume-112b8966-154d-485e-9678-d2f85e1e8af9" satisfied condition "Succeeded or Failed" May 11 19:46:56.403: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-112b8966-154d-485e-9678-d2f85e1e8af9 container client-container: STEP: delete the pod May 11 19:46:56.466: INFO: Waiting for pod downwardapi-volume-112b8966-154d-485e-9678-d2f85e1e8af9 to disappear May 11 19:46:56.717: INFO: Pod downwardapi-volume-112b8966-154d-485e-9678-d2f85e1e8af9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:46:56.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3239" for this suite. 
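The projected downward-API test above mounts a volume file reporting the container's memory limit; since the container sets no limit, the kubelet substitutes the node's allocatable memory, which is what the test verifies. A sketch of the relevant pod spec in client-go types (the image and names are illustrative, not the test's own):

package examples

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithMemoryLimitFile exposes the container's effective memory limit
// at /etc/podinfo/memory_limit through a projected downward-API volume.
func podWithMemoryLimitFile() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox", // illustrative; the e2e test uses its own test image
                Command:      []string{"cat", "/etc/podinfo/memory_limit"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "memory_limit",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.memory",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
}
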
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":156,"skipped":2409,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:46:56.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:47:01.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5501" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":157,"skipped":2413,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:47:01.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 11 19:47:01.228: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
May 11 19:47:02.168: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 11 19:47:04.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823222, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823222, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823222, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823222, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:47:06.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823222, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823222, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823222, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823222, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:47:09.164: INFO: Waited 544.263706ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:47:10.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2329" for this suite. 
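Registering a sample API server with the aggregator, as the test does, comes down to creating an APIService object that points at a Service fronting the extension apiserver. A hedged sketch using the kube-aggregator client; the group, namespace, and service names here are placeholders, not the conformance test's own wiring:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
    apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := aggregatorclient.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    port := int32(443)
    apiService := &apiregistrationv1.APIService{
        // The object name must be "<version>.<group>" for the API it serves.
        ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
        Spec: apiregistrationv1.APIServiceSpec{
            Group:                 "wardle.example.com", // placeholder group
            Version:               "v1alpha1",
            GroupPriorityMinimum:  2000,
            VersionPriority:       200,
            InsecureSkipTLSVerify: true, // demo only; supply CABundle in real deployments
            Service: &apiregistrationv1.ServiceReference{
                Namespace: "sample-system", // placeholder namespace
                Name:      "sample-api",    // placeholder service name
                Port:      &port,
            },
        },
    }
    if _, err := client.ApiregistrationV1().APIServices().Create(
        context.TODO(), apiService, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
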
• [SLOW TEST:9.666 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":158,"skipped":2420,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:47:10.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-gm6r STEP: Creating a pod to test atomic-volume-subpath May 11 19:47:11.158: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gm6r" in namespace "subpath-2638" to be "Succeeded or Failed" May 11 19:47:11.252: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Pending", Reason="", readiness=false. Elapsed: 94.403183ms May 11 19:47:13.472: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31477597s May 11 19:47:15.476: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 4.318708218s May 11 19:47:17.480: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 6.322311481s May 11 19:47:19.483: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 8.325640403s May 11 19:47:21.502: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 10.344714043s May 11 19:47:23.506: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 12.34823196s May 11 19:47:25.525: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 14.367816601s May 11 19:47:27.530: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 16.372040288s May 11 19:47:29.533: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 18.37574935s May 11 19:47:31.549: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 20.391196284s May 11 19:47:33.552: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Running", Reason="", readiness=true. Elapsed: 22.394743989s May 11 19:47:35.569: INFO: Pod "pod-subpath-test-downwardapi-gm6r": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.411548419s STEP: Saw pod success May 11 19:47:35.569: INFO: Pod "pod-subpath-test-downwardapi-gm6r" satisfied condition "Succeeded or Failed" May 11 19:47:35.572: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-gm6r container test-container-subpath-downwardapi-gm6r: STEP: delete the pod May 11 19:47:36.110: INFO: Waiting for pod pod-subpath-test-downwardapi-gm6r to disappear May 11 19:47:36.138: INFO: Pod pod-subpath-test-downwardapi-gm6r no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-gm6r May 11 19:47:36.138: INFO: Deleting pod "pod-subpath-test-downwardapi-gm6r" in namespace "subpath-2638" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:47:36.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2638" for this suite. • [SLOW TEST:25.308 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":159,"skipped":2423,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:47:36.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:47:36.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 11 19:47:36.940: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T19:47:36Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T19:47:36Z]] name:name1 resourceVersion:3545201 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:44c6d1a2-cd29-4579-a6d1-0c3aca8c5396] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 11 19:47:46.946: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T19:47:46Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] 
manager:e2e.test operation:Update time:2020-05-11T19:47:46Z]] name:name2 resourceVersion:3545288 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cdb5bb0e-8c8d-445c-8d12-68d162fcdbf7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 11 19:47:56.952: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T19:47:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T19:47:56Z]] name:name1 resourceVersion:3545329 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:44c6d1a2-cd29-4579-a6d1-0c3aca8c5396] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 11 19:48:06.958: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T19:47:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T19:48:06Z]] name:name2 resourceVersion:3545367 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cdb5bb0e-8c8d-445c-8d12-68d162fcdbf7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 11 19:48:16.966: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T19:47:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T19:47:56Z]] name:name1 resourceVersion:3545407 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:44c6d1a2-cd29-4579-a6d1-0c3aca8c5396] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 11 19:48:27.064: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T19:47:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-11T19:48:06Z]] name:name2 resourceVersion:3545443 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:cdb5bb0e-8c8d-445c-8d12-68d162fcdbf7] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:48:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5681" for this suite. 
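The ADDED/MODIFIED/DELETED events above come from a watch opened on the custom resource. Outside the e2e framework, the same stream can be opened with the dynamic client; this sketch reuses the group/version/resource visible in the log (the noxu CRD is cluster-scoped, so no namespace is set), assuming a client-go recent enough to take a context:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    dyn, err := dynamic.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // GroupVersionResource taken from the selfLinks in the log above.
    gvr := schema.GroupVersionResource{
        Group:    "mygroup.example.com",
        Version:  "v1beta1",
        Resource: "noxus",
    }
    w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for event := range w.ResultChan() {
        fmt.Printf("Got : %s %v\n", event.Type, event.Object)
    }
}
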
• [SLOW TEST:61.430 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":160,"skipped":2428,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:48:37.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-7911ec2e-3ac3-4325-a972-1549881ce08a STEP: Creating a pod to test consume configMaps May 11 19:48:37.646: INFO: Waiting up to 5m0s for pod "pod-configmaps-a957d472-52d6-437e-b1b4-5b8fbc541bc4" in namespace "configmap-2831" to be "Succeeded or Failed" May 11 19:48:37.736: INFO: Pod "pod-configmaps-a957d472-52d6-437e-b1b4-5b8fbc541bc4": Phase="Pending", Reason="", readiness=false. Elapsed: 89.726847ms May 11 19:48:39.739: INFO: Pod "pod-configmaps-a957d472-52d6-437e-b1b4-5b8fbc541bc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093039321s May 11 19:48:41.756: INFO: Pod "pod-configmaps-a957d472-52d6-437e-b1b4-5b8fbc541bc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10997603s STEP: Saw pod success May 11 19:48:41.756: INFO: Pod "pod-configmaps-a957d472-52d6-437e-b1b4-5b8fbc541bc4" satisfied condition "Succeeded or Failed" May 11 19:48:41.759: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a957d472-52d6-437e-b1b4-5b8fbc541bc4 container configmap-volume-test: STEP: delete the pod May 11 19:48:41.960: INFO: Waiting for pod pod-configmaps-a957d472-52d6-437e-b1b4-5b8fbc541bc4 to disappear May 11 19:48:42.014: INFO: Pod pod-configmaps-a957d472-52d6-437e-b1b4-5b8fbc541bc4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:48:42.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2831" for this suite. 
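Setting defaultMode on a configMap volume controls the permission bits of every file the kubelet projects from it, which is the behavior the test name encodes. A minimal volume stanza in client-go types; the 0400 mode and the names are illustrative:

package examples

import corev1 "k8s.io/api/core/v1"

// configMapVolume returns a configMap-backed volume whose projected
// files are created with the given permission bits via DefaultMode.
func configMapVolume(name, configMapName string, mode int32) corev1.Volume {
    return corev1.Volume{
        Name: name,
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                DefaultMode:          &mode,
            },
        },
    }
}

// Example: configMapVolume("configmap-volume", "configmap-test-volume", 0400)
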
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2442,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:48:42.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 11 19:48:42.374: INFO: Waiting up to 5m0s for pod "downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a" in namespace "downward-api-1018" to be "Succeeded or Failed" May 11 19:48:42.379: INFO: Pod "downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.67976ms May 11 19:48:44.387: INFO: Pod "downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012521839s May 11 19:48:46.391: INFO: Pod "downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a": Phase="Running", Reason="", readiness=true. Elapsed: 4.016396843s May 11 19:48:48.394: INFO: Pod "downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019855423s STEP: Saw pod success May 11 19:48:48.394: INFO: Pod "downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a" satisfied condition "Succeeded or Failed" May 11 19:48:48.397: INFO: Trying to get logs from node latest-worker2 pod downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a container dapi-container: STEP: delete the pod May 11 19:48:48.516: INFO: Waiting for pod downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a to disappear May 11 19:48:48.547: INFO: Pod downward-api-f6ee56d4-3f05-4c64-a525-35b304deb44a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:48:48.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1018" for this suite. 
• [SLOW TEST:6.537 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2456,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:48:48.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2516 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 19:48:48.661: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 11 19:48:48.823: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 19:48:50.936: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 19:48:52.826: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 19:48:54.835: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:48:56.826: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:48:58.830: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:49:00.827: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:49:02.826: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:49:04.827: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:49:06.827: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:49:08.828: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 19:49:10.826: INFO: The status of Pod netserver-0 is Running (Ready = true) May 11 19:49:10.831: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 11 19:49:14.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.250:8080/dial?request=hostname&protocol=http&host=10.244.1.150&port=8080&tries=1'] Namespace:pod-network-test-2516 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:49:14.895: INFO: >>> kubeConfig: /root/.kube/config I0511 19:49:14.923979 7 log.go:172] (0xc002c3fad0) (0xc000a13180) Create stream I0511 19:49:14.924005 7 log.go:172] (0xc002c3fad0) (0xc000a13180) Stream added, broadcasting: 1 I0511 19:49:14.925679 7 log.go:172] (0xc002c3fad0) Reply frame received for 1 I0511 19:49:14.925728 7 log.go:172] (0xc002c3fad0) (0xc00141f9a0) Create stream I0511 19:49:14.925746 7 log.go:172] 
(0xc002c3fad0) (0xc00141f9a0) Stream added, broadcasting: 3 I0511 19:49:14.926462 7 log.go:172] (0xc002c3fad0) Reply frame received for 3 I0511 19:49:14.926490 7 log.go:172] (0xc002c3fad0) (0xc000a13400) Create stream I0511 19:49:14.926498 7 log.go:172] (0xc002c3fad0) (0xc000a13400) Stream added, broadcasting: 5 I0511 19:49:14.927080 7 log.go:172] (0xc002c3fad0) Reply frame received for 5 I0511 19:49:14.981350 7 log.go:172] (0xc002c3fad0) Data frame received for 3 I0511 19:49:14.981413 7 log.go:172] (0xc00141f9a0) (3) Data frame handling I0511 19:49:14.981448 7 log.go:172] (0xc00141f9a0) (3) Data frame sent I0511 19:49:14.981924 7 log.go:172] (0xc002c3fad0) Data frame received for 5 I0511 19:49:14.981945 7 log.go:172] (0xc000a13400) (5) Data frame handling I0511 19:49:14.981962 7 log.go:172] (0xc002c3fad0) Data frame received for 3 I0511 19:49:14.981996 7 log.go:172] (0xc00141f9a0) (3) Data frame handling I0511 19:49:14.983289 7 log.go:172] (0xc002c3fad0) Data frame received for 1 I0511 19:49:14.983309 7 log.go:172] (0xc000a13180) (1) Data frame handling I0511 19:49:14.983326 7 log.go:172] (0xc000a13180) (1) Data frame sent I0511 19:49:14.983337 7 log.go:172] (0xc002c3fad0) (0xc000a13180) Stream removed, broadcasting: 1 I0511 19:49:14.983358 7 log.go:172] (0xc002c3fad0) Go away received I0511 19:49:14.983431 7 log.go:172] (0xc002c3fad0) (0xc000a13180) Stream removed, broadcasting: 1 I0511 19:49:14.983444 7 log.go:172] (0xc002c3fad0) (0xc00141f9a0) Stream removed, broadcasting: 3 I0511 19:49:14.983451 7 log.go:172] (0xc002c3fad0) (0xc000a13400) Stream removed, broadcasting: 5 May 11 19:49:14.983: INFO: Waiting for responses: map[] May 11 19:49:14.986: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.250:8080/dial?request=hostname&protocol=http&host=10.244.2.248&port=8080&tries=1'] Namespace:pod-network-test-2516 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:49:14.986: INFO: >>> kubeConfig: /root/.kube/config I0511 19:49:15.077025 7 log.go:172] (0xc0036d86e0) (0xc000427180) Create stream I0511 19:49:15.077105 7 log.go:172] (0xc0036d86e0) (0xc000427180) Stream added, broadcasting: 1 I0511 19:49:15.079134 7 log.go:172] (0xc0036d86e0) Reply frame received for 1 I0511 19:49:15.079175 7 log.go:172] (0xc0036d86e0) (0xc000a134a0) Create stream I0511 19:49:15.079192 7 log.go:172] (0xc0036d86e0) (0xc000a134a0) Stream added, broadcasting: 3 I0511 19:49:15.080093 7 log.go:172] (0xc0036d86e0) Reply frame received for 3 I0511 19:49:15.080132 7 log.go:172] (0xc0036d86e0) (0xc002dcf5e0) Create stream I0511 19:49:15.080147 7 log.go:172] (0xc0036d86e0) (0xc002dcf5e0) Stream added, broadcasting: 5 I0511 19:49:15.080912 7 log.go:172] (0xc0036d86e0) Reply frame received for 5 I0511 19:49:15.148614 7 log.go:172] (0xc0036d86e0) Data frame received for 3 I0511 19:49:15.148647 7 log.go:172] (0xc000a134a0) (3) Data frame handling I0511 19:49:15.148671 7 log.go:172] (0xc000a134a0) (3) Data frame sent I0511 19:49:15.149553 7 log.go:172] (0xc0036d86e0) Data frame received for 5 I0511 19:49:15.149579 7 log.go:172] (0xc002dcf5e0) (5) Data frame handling I0511 19:49:15.150269 7 log.go:172] (0xc0036d86e0) Data frame received for 3 I0511 19:49:15.150310 7 log.go:172] (0xc000a134a0) (3) Data frame handling I0511 19:49:15.151138 7 log.go:172] (0xc0036d86e0) Data frame received for 1 I0511 19:49:15.151152 7 log.go:172] (0xc000427180) (1) Data frame handling I0511 19:49:15.151168 7 log.go:172] (0xc000427180) (1) Data 
frame sent I0511 19:49:15.151178 7 log.go:172] (0xc0036d86e0) (0xc000427180) Stream removed, broadcasting: 1 I0511 19:49:15.151274 7 log.go:172] (0xc0036d86e0) Go away received I0511 19:49:15.151306 7 log.go:172] (0xc0036d86e0) (0xc000427180) Stream removed, broadcasting: 1 I0511 19:49:15.151326 7 log.go:172] (0xc0036d86e0) (0xc000a134a0) Stream removed, broadcasting: 3 I0511 19:49:15.151344 7 log.go:172] (0xc0036d86e0) (0xc002dcf5e0) Stream removed, broadcasting: 5 May 11 19:49:15.151: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:49:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2516" for this suite. • [SLOW TEST:26.597 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:49:15.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 11 19:49:15.280: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:49:15.351: INFO: Number of nodes with available pods: 0 May 11 19:49:15.351: INFO: Node latest-worker is running more than one daemon pod May 11 19:49:16.356: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:49:16.359: INFO: Number of nodes with available pods: 0 May 11 19:49:16.359: INFO: Node latest-worker is running more than one daemon pod May 11 19:49:17.547: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:49:17.622: INFO: Number of nodes with available pods: 0 May 11 19:49:17.622: INFO: Node latest-worker is running more than one daemon pod May 11 19:49:18.504: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:49:19.006: INFO: Number of nodes with available pods: 0 May 11 19:49:19.006: INFO: Node latest-worker is running more than one daemon pod May 11 19:49:19.390: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:49:19.393: INFO: Number of nodes with available pods: 0 May 11 19:49:19.393: INFO: Node latest-worker is running more than one daemon pod May 11 19:49:20.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:49:20.762: INFO: Number of nodes with available pods: 1 May 11 19:49:20.762: INFO: Node latest-worker2 is running more than one daemon pod May 11 19:49:21.467: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:49:21.659: INFO: Number of nodes with available pods: 2 May 11 19:49:21.659: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 11 19:49:22.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:49:22.222: INFO: Number of nodes with available pods: 2 May 11 19:49:22.222: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
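Every iteration above skips latest-control-plane because the test's DaemonSet carries no toleration for the master taint. A sketch of a DaemonSet that would schedule onto that node as well (names and image are illustrative, not the test's own):

package examples

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// daemonSetWithMasterToleration tolerates the NoSchedule master taint
// logged above, so its pods land on control-plane nodes too.
func daemonSetWithMasterToleration() *appsv1.DaemonSet {
    labels := map[string]string{"name": "daemon-set-demo"}
    return &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-demo"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Tolerations: []corev1.Toleration{{
                        Key:      "node-role.kubernetes.io/master",
                        Operator: corev1.TolerationOpExists,
                        Effect:   corev1.TaintEffectNoSchedule,
                    }},
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "k8s.gcr.io/pause:3.2",
                    }},
                },
            },
        },
    }
}
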
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3126, will wait for the garbage collector to delete the pods May 11 19:49:24.060: INFO: Deleting DaemonSet.extensions daemon-set took: 9.814856ms May 11 19:49:24.460: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.214508ms May 11 19:49:35.892: INFO: Number of nodes with available pods: 0 May 11 19:49:35.892: INFO: Number of running nodes: 0, number of available pods: 0 May 11 19:49:35.895: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3126/daemonsets","resourceVersion":"3545924"},"items":null} May 11 19:49:35.908: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3126/pods","resourceVersion":"3545925"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:49:35.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3126" for this suite. • [SLOW TEST:20.803 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":164,"skipped":2512,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:49:35.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:49:36.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d" in namespace "projected-5033" to be "Succeeded or Failed" May 11 19:49:36.130: INFO: Pod "downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.594349ms May 11 19:49:38.210: INFO: Pod "downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092262398s May 11 19:49:40.251: INFO: Pod "downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.133736146s May 11 19:49:42.256: INFO: Pod "downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13792189s STEP: Saw pod success May 11 19:49:42.256: INFO: Pod "downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d" satisfied condition "Succeeded or Failed" May 11 19:49:42.259: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d container client-container: STEP: delete the pod May 11 19:49:42.294: INFO: Waiting for pod downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d to disappear May 11 19:49:42.327: INFO: Pod downwardapi-volume-1b96cb6d-1b35-4de5-9475-0524166a6f7d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:49:42.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5033" for this suite. • [SLOW TEST:6.375 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2523,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:49:42.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 11 19:49:42.410: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 19:49:42.422: INFO: Waiting for terminating namespaces to be deleted... 
May 11 19:49:42.424: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 11 19:49:42.429: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 11 19:49:42.429: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 11 19:49:42.429: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 11 19:49:42.429: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 11 19:49:42.429: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 19:49:42.429: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:49:42.429: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 19:49:42.429: INFO: Container kube-proxy ready: true, restart count 0 May 11 19:49:42.429: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 11 19:49:42.433: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 11 19:49:42.433: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 11 19:49:42.433: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 19:49:42.433: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:49:42.433: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 19:49:42.433: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 11 19:49:42.490: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 11 19:49:42.490: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 11 19:49:42.490: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 11 19:49:42.490: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 11 19:49:42.490: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 11 19:49:42.490: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 11 19:49:42.497: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-93a13c44-98ee-4a4b-b94f-4692a9d260e7.160e112820f26178], Reason = [Scheduled], Message = [Successfully assigned sched-pred-595/filler-pod-93a13c44-98ee-4a4b-b94f-4692a9d260e7 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-93a13c44-98ee-4a4b-b94f-4692a9d260e7.160e1128a59f210c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-93a13c44-98ee-4a4b-b94f-4692a9d260e7.160e1128f77da147], Reason = [Created], Message = [Created container filler-pod-93a13c44-98ee-4a4b-b94f-4692a9d260e7] STEP: Considering event: Type = [Normal], Name = [filler-pod-93a13c44-98ee-4a4b-b94f-4692a9d260e7.160e1129094c1726], Reason = [Started], Message = [Started container filler-pod-93a13c44-98ee-4a4b-b94f-4692a9d260e7] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd325cae-bb8b-4bce-b2a1-e3391abda877.160e112821304c36], Reason = [Scheduled], Message = [Successfully assigned sched-pred-595/filler-pod-cd325cae-bb8b-4bce-b2a1-e3391abda877 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd325cae-bb8b-4bce-b2a1-e3391abda877.160e11286dda4281], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd325cae-bb8b-4bce-b2a1-e3391abda877.160e1128d68ed2d5], Reason = [Created], Message = [Created container filler-pod-cd325cae-bb8b-4bce-b2a1-e3391abda877] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd325cae-bb8b-4bce-b2a1-e3391abda877.160e1128ec4b09b5], Reason = [Started], Message = [Started container filler-pod-cd325cae-bb8b-4bce-b2a1-e3391abda877] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e1129899aa6e0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e11298b3bb132], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:49:49.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-595" for this suite. 
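The filler pods work because scheduling is driven by declared requests, not actual usage: the test sums the existing cpu requests per node, fills the remainder, and the one additional pod then fails with "Insufficient cpu". A sketch of how such a request is declared (the quantity is illustrative):

package examples

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// cpuRequest returns a container resources stanza that reserves cpu
// from the node's allocatable pool at scheduling time.
func cpuRequest(quantity string) corev1.ResourceRequirements {
    return corev1.ResourceRequirements{
        Requests: corev1.ResourceList{
            corev1.ResourceCPU: resource.MustParse(quantity), // e.g. "500m"
        },
    }
}
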
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.346 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":166,"skipped":2527,"failed":0} S ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:49:49.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:49:49.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9756" for this suite. 
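The event create/patch/fetch/delete/list steps above map to plain CoreV1 calls. A sketch of the cross-namespace listing, the same operation as the test's "listing all events in all namespaces" step (kubeconfig path as in the log):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // NamespaceAll ("") lists events in every namespace.
    events, err := clientset.CoreV1().Events(metav1.NamespaceAll).List(
        context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, e := range events.Items {
        fmt.Printf("%s/%s: %s\n", e.Namespace, e.Name, e.Message)
    }
}
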
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":167,"skipped":2528,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:49:49.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:49:50.533: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:49:52.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823390, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823390, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823390, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823390, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 19:49:56.031: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:49:56.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8032" for this suite. STEP: Destroying namespace "webhook-8032-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.047 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":168,"skipped":2528,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:49:56.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 11 19:49:57.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 11 19:49:57.500: INFO: stderr: "" May 11 19:49:57.500: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:49:57.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7891" for this suite. 
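kubectl api-versions is a thin wrapper over the discovery API, and the assertion here is simply that the core group's bare "v1" appears in the output. An equivalent sketch with the discovery client:

package main

import (
    "fmt"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    dc, err := discovery.NewDiscoveryClientForConfig(config)
    if err != nil {
        panic(err)
    }
    groups, err := dc.ServerGroups()
    if err != nil {
        panic(err)
    }
    // Prints one group/version per line, e.g. "apps/v1", and the bare
    // "v1" of the legacy core group, the entry the test looks for.
    for _, g := range groups.Groups {
        for _, v := range g.Versions {
            fmt.Println(v.GroupVersion)
        }
    }
}
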
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":169,"skipped":2536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:49:57.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:50:25.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4754" for this suite. STEP: Destroying namespace "nsdeletetest-3649" for this suite. May 11 19:50:25.181: INFO: Namespace nsdeletetest-3649 was already deleted STEP: Destroying namespace "nsdeletetest-4649" for this suite. 
• [SLOW TEST:27.677 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":170,"skipped":2577,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:50:25.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:50:25.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1168" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":171,"skipped":2617,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:50:25.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:50:43.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7355" for this suite. 
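The Job test above passes because restartPolicy: OnFailure makes the kubelet restart a failed container in place rather than having the Job controller replace the pod. A sketch of a Job in that shape, assuming a busybox image and an emptyDir scratch volume so state survives the restart; names and counts are illustrative:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local-demo"},
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2),
			Completions: int32Ptr(4),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure: the kubelet restarts the failed container inside the
					// same pod ("locally restarted") instead of the Job controller
					// creating a replacement pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox", // illustrative image
						// Fail on the first run, succeed after the restart; the
						// emptyDir survives container restarts, so the marker persists.
						Command: []string{"/bin/sh", "-c",
							"if [ -e /data/ran ]; then exit 0; else touch /data/ran; exit 1; fi"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
	if _, err := client.BatchV1().Jobs("default").Create(
		context.TODO(), job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Each pod fails exactly once and then completes, matching the "restart count 1" containers visible in the node dumps below.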
• [SLOW TEST:18.094 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":172,"skipped":2628,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:50:43.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 11 19:50:43.729: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 19:50:43.770: INFO: Waiting for terminating namespaces to be deleted... May 11 19:50:43.918: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 11 19:50:43.922: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 11 19:50:43.922: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 11 19:50:43.922: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 11 19:50:43.922: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 11 19:50:43.922: INFO: fail-once-local-4zg5q from job-7355 started at 2020-05-11 19:50:25 +0000 UTC (1 container status recorded) May 11 19:50:43.922: INFO: Container c ready: false, restart count 1 May 11 19:50:43.922: INFO: fail-once-local-cdvhf from job-7355 started at 2020-05-11 19:50:34 +0000 UTC (1 container status recorded) May 11 19:50:43.922: INFO: Container c ready: false, restart count 1 May 11 19:50:43.922: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 11 19:50:43.922: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:50:43.922: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 11 19:50:43.922: INFO: Container kube-proxy ready: true, restart count 0 May 11 19:50:43.922: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 11 19:50:43.931: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 11 19:50:43.931: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 11 19:50:43.931: INFO: rally-c84651be-xrjuifax from c-rally-c84651be-475cto40 started at 2020-05-11 19:50:03 +0000 UTC (1 container status recorded) May 11 19:50:43.931: INFO: Container rally-c84651be-xrjuifax ready: false, restart count 0 May 11 19:50:43.931: INFO: fail-once-local-qt7bp from job-7355 started at 2020-05-11 19:50:34 +0000 UTC (1 container status recorded) May 11 19:50:43.931: INFO: Container c ready: false, restart count 1 May 11 19:50:43.931: INFO: fail-once-local-wqct8 from job-7355 started at 2020-05-11 19:50:25 +0000 UTC (1 container status recorded) May 11 19:50:43.931: INFO: Container c ready: false, restart count 1 May 11 19:50:43.931: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 11 19:50:43.931: INFO: Container kindnet-cni ready: true, restart count 0 May 11 19:50:43.931: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 11 19:50:43.931: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8e9ecbab-1998-4a86-8c42-3d839bb95d89 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-8e9ecbab-1998-4a86-8c42-3d839bb95d89 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-8e9ecbab-1998-4a86-8c42-3d839bb95d89 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:51:02.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1795" for this suite.
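The scheduler treats a hostPort binding as the tuple (hostIP, protocol, hostPort), so the three pods above can share port 54321 on one node. A sketch of that shape, with a hypothetical pause image; the node name is borrowed from the log and targeted via the standard kubernetes.io/hostname label so the scheduler stays in the loop:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podWithHostPort builds a pod steered to one node with a single hostPort binding.
func podWithHostPort(name, node, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/hostname": node},
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 80,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods := client.CoreV1().Pods("default")
	ctx := context.TODO()

	// All three bind hostPort 54321 on the same node, but each differs in
	// hostIP or protocol, so the scheduler reports no conflict.
	for _, p := range []*corev1.Pod{
		podWithHostPort("pod1", "latest-worker2", "127.0.0.1", corev1.ProtocolTCP),
		podWithHostPort("pod2", "latest-worker2", "127.0.0.2", corev1.ProtocolTCP),
		podWithHostPort("pod3", "latest-worker2", "127.0.0.2", corev1.ProtocolUDP),
	} {
		if _, err := pods.Create(ctx, p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("created", p.Name)
	}
}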
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:19.165 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":173,"skipped":2634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:51:02.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:51:02.652: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:51:03.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5835" for this suite. 
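The CRD test above drives the /status subresource, which is a separate endpoint from the main resource. A sketch of a GET plus a merge patch against it, using the apiextensions clientset; the CRD name is hypothetical, and the patch touches only metadata labels, which the status endpoint accepts:

package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextensionsclient.NewForConfigOrDie(cfg)
	crds := client.ApiextensionsV1().CustomResourceDefinitions()
	ctx := context.TODO()

	name := "noxus.mygroup.example.com" // hypothetical CRD name

	// GET the CRD; its .status carries conditions such as Established.
	crd, err := crds.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("conditions:", len(crd.Status.Conditions))

	// PATCH through the status subresource: the trailing "status" argument
	// routes the request to .../customresourcedefinitions/<name>/status.
	patch := []byte(`{"metadata":{"labels":{"e2e-demo":"true"}}}`)
	if _, err := crds.Patch(ctx, name, types.MergePatchType, patch,
		metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}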
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":174,"skipped":2659,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:51:03.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-86434523-4373-440f-b048-083cb046aaee STEP: Creating configMap with name cm-test-opt-upd-309d9acd-74d3-4c77-8b49-cd3f6b857896 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-86434523-4373-440f-b048-083cb046aaee STEP: Updating configmap cm-test-opt-upd-309d9acd-74d3-4c77-8b49-cd3f6b857896 STEP: Creating configMap with name cm-test-opt-create-eb028696-8763-452a-9448-91dd42ff6cb6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:52:35.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7372" for this suite. 
• [SLOW TEST:91.873 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":2665,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:52:35.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:52:35.391: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833" in namespace "downward-api-1158" to be "Succeeded or Failed" May 11 19:52:35.472: INFO: Pod "downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833": Phase="Pending", Reason="", readiness=false. Elapsed: 81.074581ms May 11 19:52:37.475: INFO: Pod "downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084687346s May 11 19:52:39.498: INFO: Pod "downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10701711s May 11 19:52:41.542: INFO: Pod "downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151248011s STEP: Saw pod success May 11 19:52:41.542: INFO: Pod "downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833" satisfied condition "Succeeded or Failed" May 11 19:52:41.545: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833 container client-container: STEP: delete the pod May 11 19:52:41.705: INFO: Waiting for pod downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833 to disappear May 11 19:52:41.911: INFO: Pod downwardapi-volume-2f7ea14d-b2c3-4a39-9827-44a4c8e59833 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:52:41.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1158" for this suite. 
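The downward API test above exposes limits.cpu to the container through a volume file. A sketch, assuming a 500m CPU limit and a 1m divisor so the projected file reads 500; the image and names are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// Exposes limits.cpu in millicores: 500m limit / 1m divisor
							// puts "500" in /etc/podinfo/cpu_limit.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The pod runs to completion and its log contains the rendered limit, matching the "Succeeded or Failed" wait and log fetch in the test above.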
• [SLOW TEST:6.710 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":176,"skipped":2694,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:52:41.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:52:48.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3233" for this suite. • [SLOW TEST:6.754 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":177,"skipped":2710,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:52:48.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:53:05.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5748" for this suite. • [SLOW TEST:17.503 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":178,"skipped":2725,"failed":0} [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:53:06.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:53:06.407: INFO: Creating deployment "webserver-deployment" May 11 19:53:06.427: INFO: Waiting for observed generation 1 May 11 19:53:08.529: INFO: Waiting for all required pods to come up May 11 19:53:08.534: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 11 19:53:20.742: INFO: Waiting for deployment "webserver-deployment" to complete May 11 19:53:20.747: INFO: Updating deployment "webserver-deployment" with a non-existent image May 11 19:53:20.754: INFO: Updating deployment webserver-deployment May 11 19:53:20.754: INFO: Waiting for observed generation 2 May 11 19:53:22.771: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 11 19:53:22.773: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 11 19:53:22.775: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 11 19:53:22.782: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 11 19:53:22.782: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 11 19:53:22.784: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 11 19:53:22.788: INFO: Verifying that deployment "webserver-deployment" has minimum required number of 
available replicas May 11 19:53:22.788: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 11 19:53:22.795: INFO: Updating deployment webserver-deployment May 11 19:53:22.795: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 11 19:53:23.500: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 11 19:53:23.968: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 19:53:24.735: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6602 /apis/apps/v1/namespaces/deployment-6602/deployments/webserver-deployment 597e3830-a5e8-40b9-b921-c2d1077fd579 3547839 3 2020-05-11 19:53:06 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-11 19:53:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044d79e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-11 19:53:21 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 19:53:23 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 11 19:53:25.451: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-6602 /apis/apps/v1/namespaces/deployment-6602/replicasets/webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 3547895 3 2020-05-11 19:53:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 597e3830-a5e8-40b9-b921-c2d1077fd579 0xc0044d7e87 0xc0044d7e88}] [] [{kube-controller-manager Update apps/v1 2020-05-11 19:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"597e3830-a5e8-40b9-b921-c2d1077fd579\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044d7f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 19:53:25.451: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 11 19:53:25.451: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-6602 /apis/apps/v1/namespaces/deployment-6602/replicasets/webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 3547880 3 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 597e3830-a5e8-40b9-b921-c2d1077fd579 0xc0044d7f67 0xc0044d7f68}] [] [{kube-controller-manager Update apps/v1 2020-05-11 19:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"597e3830-a5e8-40b9-b921-c2d1077fd579\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0044d7fd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 11 19:53:25.683: INFO: Pod "webserver-deployment-6676bcd6d4-2qrlm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2qrlm webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-2qrlm d5cd5e87-cb93-473b-8cc1-2726a84a704d 3547867 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 
a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441cb37 0xc00441cb38}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled
,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.683: INFO: Pod "webserver-deployment-6676bcd6d4-4lftr" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4lftr webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-4lftr bb4d14a4-0cbd-4ba2-97bb-15c6e7185326 3547877 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441cc77 0xc00441cc78}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.683: INFO: Pod "webserver-deployment-6676bcd6d4-9kmp2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9kmp2 webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-9kmp2 d6c718d7-b9a2-4d8d-b892-ebb8ecc85bcc 3547855 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441cdb7 0xc00441cdb8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabili
ties:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.684: INFO: Pod "webserver-deployment-6676bcd6d4-dk5q8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dk5q8 webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-dk5q8 2388ab30-0f25-4d48-a930-85e6b769cb90 3547806 0 2020-05-11 19:53:21 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441cef7 0xc00441cef8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:21 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 19:53:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.684: INFO: Pod "webserver-deployment-6676bcd6d4-fn298" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fn298 webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-fn298 cbf8dd94-f210-47a2-ac6f-4f3ab308251b 3547783 0 2020-05-11 19:53:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441d0c7 0xc00441d0c8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-11 19:53:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.684: INFO: Pod "webserver-deployment-6676bcd6d4-j8gtt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-j8gtt webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-j8gtt 4accaf8c-d8c8-49f5-a717-886a6670ed8d 3547874 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441d277 0xc00441d278}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.685: INFO: Pod "webserver-deployment-6676bcd6d4-jpqbq" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jpqbq webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-jpqbq d6a12f07-899c-49df-84be-572ace993b67 3547873 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441d3b7 0xc00441d3b8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.685: INFO: Pod "webserver-deployment-6676bcd6d4-lktj9" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lktj9 webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-lktj9 67c48350-062b-4403-a4a5-60831d544e20 3547844 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441d4f7 0xc00441d4f8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainer
s:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.685: INFO: Pod "webserver-deployment-6676bcd6d4-qpds8" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qpds8 webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-qpds8 b8c7d60d-eeac-480d-8132-a9579d5ca339 3547882 0 2020-05-11 19:53:24 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441d637 0xc00441d638}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,A
llowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.686: INFO: Pod "webserver-deployment-6676bcd6d4-qr6v7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qr6v7 webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-qr6v7 a4fd1a36-f5e3-49de-ae4f-cb4e59cbeb55 3547779 0 2020-05-11 19:53:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441d777 0xc00441d778}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 19:53:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.686: INFO: Pod "webserver-deployment-6676bcd6d4-slmjq" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-slmjq webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-slmjq 4e25461c-fa47-4b78-9794-9c7558e33f55 3547907 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441d937 0xc00441d938}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 19:53:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.686: INFO: Pod "webserver-deployment-6676bcd6d4-vgm47" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vgm47 webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-vgm47 8f19ad39-afdb-4b5b-b6ae-6f6050af33e1 3547802 0 2020-05-11 19:53:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441dae7 0xc00441dae8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:21 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 19:53:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.686: INFO: Pod "webserver-deployment-6676bcd6d4-x7sgb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-x7sgb webserver-deployment-6676bcd6d4- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-6676bcd6d4-x7sgb db3752b5-7007-489e-90f5-6d50bae40b45 3547793 0 2020-05-11 19:53:20 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 a2faefe8-167b-4a41-8d32-b2c472d624b2 0xc00441dc97 0xc00441dc98}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2faefe8-167b-4a41-8d32-b2c472d624b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 19:53:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.686: INFO: Pod "webserver-deployment-84855cf797-5497r" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5497r webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-5497r 4953d17d-3f00-4bd7-b219-8807c8ad3422 3547686 0 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc00441de47 0xc00441de48}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.161\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.161,StartTime:2020-05-11 19:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:53:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e59ec35532bf4c7b1c3922eb5b335635e1302ae648fba178a2d109f4f336cec6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.686: INFO: Pod "webserver-deployment-84855cf797-64kgd" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-64kgd webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-64kgd 1d5dd98d-a8a8-422b-94fb-d9a10af3ebe2 3547751 0 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc00441dff7 0xc00441dff8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.165\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
19:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.165,StartTime:2020-05-11 19:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:53:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://86534f87c7573f0c89136d988aaddb435037bf45d5d9802e305e872c6260a9f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.687: INFO: Pod "webserver-deployment-84855cf797-7mvgk" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7mvgk webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-7mvgk c73d8e89-c2a9-43e0-8a59-6ea09f94c80f 3547845 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b542a7 0xc002b542a8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.687: INFO: Pod "webserver-deployment-84855cf797-7q94v" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7q94v webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-7q94v a68b37a1-ff44-4ecd-8166-c82939aeebc6 3547900 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b543e7 0xc002b543e8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-11 19:53:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.687: INFO: Pod "webserver-deployment-84855cf797-99m25" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-99m25 webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-99m25 82fd4a07-7d29-4f5c-a797-ea07c2239f11 3547872 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b54787 0xc002b54788}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.687: INFO: Pod "webserver-deployment-84855cf797-cdhtq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cdhtq webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-cdhtq 52a2cc23-dc1f-4368-bbc0-b2f241148a19 3547665 0 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b548b7 0xc002b548b8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:14 +0000 UTC FieldsV1 
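Every pod here carries the label pod-template-hash:84855cf797 and a controller ownerReference to ReplicaSet webserver-deployment-84855cf797 (uid 18efaf40-11f3-4e77-b714-4568f9f3efe3), which is how the deployment controller groups these pods under one template revision. A sketch of resolving that owner from a decoded pod, assuming the same k8s.io/api types; controllerOf is an illustrative name:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // controllerOf returns the controlling ReplicaSet named in a pod's
    // ownerReferences, as seen at the top of every dump above.
    func controllerOf(pod *corev1.Pod) (string, bool) {
        for _, ref := range pod.OwnerReferences {
            if ref.Kind == "ReplicaSet" && ref.Controller != nil && *ref.Controller {
                return ref.Name, true
            }
        }
        return "", false
    }

    func main() {
        ctrl := true
        var pod corev1.Pod
        pod.OwnerReferences = []metav1.OwnerReference{{
            APIVersion: "apps/v1", Kind: "ReplicaSet",
            Name: "webserver-deployment-84855cf797", Controller: &ctrl,
        }}
        if name, ok := controllerOf(&pod); ok {
            fmt.Println(name) // webserver-deployment-84855cf797
        }
    }
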
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
19:53:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.16,StartTime:2020-05-11 19:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:53:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://efcb5660b818a987cd5383e2833f051b724b5fd7a0a3f9b0d182d500fe2978bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.687: INFO: Pod "webserver-deployment-84855cf797-ct59c" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ct59c webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-ct59c 402bbeae-cdda-4a84-bbe2-696bbc9fc3d9 3547878 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b54a67 0xc002b54a68}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Se
curityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.687: INFO: Pod "webserver-deployment-84855cf797-dlqxj" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dlqxj webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-dlqxj a6e5a1d2-5885-4ea1-a50d-8fe7a06543c7 3547733 0 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b54b97 0xc002b54b98}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:19 +0000 UTC FieldsV1 
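QOSClass:BestEffort appears in every status above because the httpd container declares empty Resources.Requests and Resources.Limits. A simplified reconstruction of that classification, covering only the BestEffort/Burstable split relevant to this log (the upstream kubelet helper also detects Guaranteed pods):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // qosClass is a simplified sketch: a pod with no requests or limits on
    // any container is BestEffort, which matches every pod in this test.
    func qosClass(pod *corev1.Pod) corev1.PodQOSClass {
        for _, c := range pod.Spec.Containers {
            if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
                return corev1.PodQOSBurstable // not the full upstream rules
            }
        }
        return corev1.PodQOSBestEffort
    }

    func main() {
        var pod corev1.Pod
        pod.Spec.Containers = []corev1.Container{{Name: "httpd"}}
        fmt.Println(qosClass(&pod)) // BestEffort
    }
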
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.163\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
19:53:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.163,StartTime:2020-05-11 19:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:53:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e7cd211b9af2b3a4e9ebd37a7da12103fad7e95a9e6a7387cda1ee45cb81ab91,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.687: INFO: Pod "webserver-deployment-84855cf797-f6j85" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-f6j85 webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-f6j85 08b55946-08b0-4529-abb1-9f9dfc9aa386 3547879 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b54d47 0xc002b54d48}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.688: INFO: Pod "webserver-deployment-84855cf797-ks4hn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ks4hn webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-ks4hn c4edfae5-0895-4c11-8df9-e920c47bdeaf 3547883 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b54e77 0xc002b54e78}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-11 19:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.688: INFO: Pod "webserver-deployment-84855cf797-lk2bz" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lk2bz webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-lk2bz b22baaa4-b0c1-4599-96d9-55fe63c5bcd2 3547696 0 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b55007 0xc002b55008}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:18 +0000 UTC FieldsV1 
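The two NoExecute tolerations with TolerationSeconds:*300 that recur in every PodSpec are not part of the test's pod template; the DefaultTolerationSeconds admission plugin injects them so a pod is evicted five minutes after its node becomes not-ready or unreachable. The defaulted values, reconstructed as a sketch over the k8s.io/api types:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        seconds := int64(300) // five minutes, the plugin's default
        tolerations := []corev1.Toleration{
            {Key: "node.kubernetes.io/not-ready", Operator: corev1.TolerationOpExists,
                Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
            {Key: "node.kubernetes.io/unreachable", Operator: corev1.TolerationOpExists,
                Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
        }
        fmt.Println(tolerations)
    }
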
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.17\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
19:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.17,StartTime:2020-05-11 19:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:53:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8ff0be0748b61a073b1bc35d64100840966b57ed9a0263b688598fc9bedec1b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.688: INFO: Pod "webserver-deployment-84855cf797-m2899" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m2899 webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-m2899 3507fb1d-9a06-4f9b-a37d-8c46cfca0622 3547875 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b551b7 0xc002b551b8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Se
curityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.688: INFO: Pod "webserver-deployment-84855cf797-n8cb2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-n8cb2 webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-n8cb2 31ea0ce3-0b30-4654-9d3e-18ca1fd92022 3547861 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b552e7 0xc002b552e8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.688: INFO: Pod "webserver-deployment-84855cf797-nc99v" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nc99v webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-nc99v 7cfc26bf-493e-4bd7-88ff-dd5dcffe8c18 3547719 0 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b556d7 0xc002b556d8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupP
robe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.19,StartTime:2020-05-11 19:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:53:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c913c63a79306f081ad89873fec0374a48b8a8627ffe263279c8bedb6f615574,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.689: INFO: Pod "webserver-deployment-84855cf797-rq69z" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rq69z webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-rq69z bea0195f-bdf1-48f4-8831-1828028eb98b 3547851 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b55af7 0xc002b55af8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 
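Each ObjectMeta above ends with up to two managedFields entries: kube-controller-manager recorded an Update of the metadata and spec when it created the pod, and the kubelet recorded a later Update of the status fields it reports (conditions, containerStatuses, hostIP, podIPs, startTime). A sketch of listing those managers, assuming the same decoded pod; printManagers is an illustrative name:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // printManagers lists which component last wrote which part of the pod,
    // the bookkeeping encoded in the FieldsV1 blobs above.
    func printManagers(pod *corev1.Pod) {
        for _, mf := range pod.ManagedFields {
            fmt.Printf("%s %s %s\n", mf.Manager, mf.Operation, mf.APIVersion)
        }
    }

    func main() {
        var pod corev1.Pod // in this log: kube-controller-manager, then kubelet
        printManagers(&pod)
    }
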
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.689: INFO: Pod "webserver-deployment-84855cf797-skhgs" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-skhgs webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-skhgs 2bca7701-630d-4957-b431-8eedd8cb7a14 3547847 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b55c27 0xc002b55c28}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:defaul
t-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.689: INFO: Pod "webserver-deployment-84855cf797-srhcq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-srhcq webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-srhcq 9a010847-fd5b-4a2c-af51-e36cbd8d3ece 3547726 0 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002b55d57 0xc002b55d58}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.162\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
19:53:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.162,StartTime:2020-05-11 19:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:53:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0c418082593b2e5360ec4af31711dd6455b5e478498936c141300a55228b5c4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.689: INFO: Pod "webserver-deployment-84855cf797-t6rwz" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-t6rwz webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-t6rwz 4b1e4dd3-3b65-426d-a0f6-894eb72f591b 3547876 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002d88527 0xc002d88528}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.689: INFO: Pod "webserver-deployment-84855cf797-t8fvb" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-t8fvb webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-t8fvb 61c41298-651e-4398-9bcd-c9362689bda9 3547741 0 2020-05-11 19:53:06 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002d88827 0xc002d88828}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.164\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
19:53:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.164,StartTime:2020-05-11 19:53:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 19:53:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1b67a5e435ea53951972130059c628af196bd978de2597d09307f64975828637,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 19:53:25.689: INFO: Pod "webserver-deployment-84855cf797-znvfd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-znvfd webserver-deployment-84855cf797- deployment-6602 /api/v1/namespaces/deployment-6602/pods/webserver-deployment-84855cf797-znvfd d72ce68e-c532-43da-ab92-b2f698d37482 3547897 0 2020-05-11 19:53:23 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 18efaf40-11f3-4e77-b714-4568f9f3efe3 0xc002d88be7 0xc002d88be8}] [] [{kube-controller-manager Update v1 2020-05-11 19:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18efaf40-11f3-4e77-b714-4568f9f3efe3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 19:53:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f4sgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f4sgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f4sgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 19:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-11 19:53:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:53:25.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6602" for this suite. • [SLOW TEST:21.137 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":179,"skipped":2725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:53:27.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:53:41.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1183" for this suite. 
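A note on what the Kubelet case above actually exercises: the suite schedules a busybox pod whose command writes a known string to stdout, then reads the container log back through the API server and compares the two (the assertions live in test/e2e/common/kubelet.go, referenced in the BeforeEach paths above). A minimal client-go sketch of that log fetch, with hypothetical pod name since the suite's generated names are not what matters here, might look like:

package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite points at.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Hypothetical pod name; the namespace matches the one in the log,
	// but the framework generates both at run time.
	req := client.CoreV1().Pods("kubelet-test-1183").GetLogs("busybox-scheduling-pod", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// The test passes when these bytes match what the busybox command echoed.
	logs, err := io.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Printf("container log: %s", logs)
}

GetLogs returns a rest.Request, so the read is a plain streamed GET that the API server proxies through to the kubelet on the pod's node.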
• [SLOW TEST:14.632 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":180,"skipped":2761,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:53:41.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-nr8r STEP: Creating a pod to test atomic-volume-subpath May 11 19:53:42.178: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nr8r" in namespace "subpath-7818" to be "Succeeded or Failed" May 11 19:53:42.218: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Pending", Reason="", readiness=false. Elapsed: 40.05148ms May 11 19:53:44.340: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16199046s May 11 19:53:46.500: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 4.322472049s May 11 19:53:48.691: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 6.513924485s May 11 19:53:51.044: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 8.866912714s May 11 19:53:53.092: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 10.914472565s May 11 19:53:55.096: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 12.918242128s May 11 19:53:57.215: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 15.03740815s May 11 19:53:59.266: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 17.088598229s May 11 19:54:01.269: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 19.091650809s May 11 19:54:03.272: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 21.094004029s May 11 19:54:05.275: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Running", Reason="", readiness=true. Elapsed: 23.097835027s May 11 19:54:07.404: INFO: Pod "pod-subpath-test-projected-nr8r": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 25.226018573s STEP: Saw pod success May 11 19:54:07.404: INFO: Pod "pod-subpath-test-projected-nr8r" satisfied condition "Succeeded or Failed" May 11 19:54:07.406: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-nr8r container test-container-subpath-projected-nr8r: STEP: delete the pod May 11 19:54:07.597: INFO: Waiting for pod pod-subpath-test-projected-nr8r to disappear May 11 19:54:07.670: INFO: Pod pod-subpath-test-projected-nr8r no longer exists STEP: Deleting pod pod-subpath-test-projected-nr8r May 11 19:54:07.670: INFO: Deleting pod "pod-subpath-test-projected-nr8r" in namespace "subpath-7818" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:54:07.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7818" for this suite. • [SLOW TEST:25.736 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":181,"skipped":2769,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:54:07.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d46c319c-361f-4f24-b2e0-6c52d0534c78 STEP: Creating a pod to test consume secrets May 11 19:54:08.041: INFO: Waiting up to 5m0s for pod "pod-secrets-8db23af8-3e54-41a7-aba1-350af0e49af9" in namespace "secrets-4348" to be "Succeeded or Failed" May 11 19:54:08.206: INFO: Pod "pod-secrets-8db23af8-3e54-41a7-aba1-350af0e49af9": Phase="Pending", Reason="", readiness=false. Elapsed: 165.578914ms May 11 19:54:10.246: INFO: Pod "pod-secrets-8db23af8-3e54-41a7-aba1-350af0e49af9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20579414s May 11 19:54:12.250: INFO: Pod "pod-secrets-8db23af8-3e54-41a7-aba1-350af0e49af9": Phase="Succeeded", Reason="", readiness=false. 
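The "Succeeded or Failed" waits above (the subpath pod, and the secrets pod being polled here) are simple phase polls, which is why each INFO line reports the Phase plus elapsed time. A minimal client-go sketch of the same loop, again with a placeholder pod name standing in for the generated one, could be:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitSucceededOrFailed polls a pod until it reaches a terminal phase,
// mirroring the suite's 5m0s "Succeeded or Failed" condition.
func waitSucceededOrFailed(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition satisfied
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s failed: %s", name, pod.Status.Reason)
		}
		return false, nil // still Pending/Running; keep polling
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Placeholder name; the log's actual pod is pod-secrets-8db23af8-....
	if err := waitSucceededOrFailed(client, "secrets-4348", "pod-secrets-example"); err != nil {
		panic(err)
	}
	fmt.Println(`satisfied condition "Succeeded or Failed"`)
}

Reaching Failed aborts the wait with an error rather than retrying, so only Succeeded lets the STEP report pod success.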
Elapsed: 4.209570943s STEP: Saw pod success May 11 19:54:12.250: INFO: Pod "pod-secrets-8db23af8-3e54-41a7-aba1-350af0e49af9" satisfied condition "Succeeded or Failed" May 11 19:54:12.254: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-8db23af8-3e54-41a7-aba1-350af0e49af9 container secret-volume-test: STEP: delete the pod May 11 19:54:12.393: INFO: Waiting for pod pod-secrets-8db23af8-3e54-41a7-aba1-350af0e49af9 to disappear May 11 19:54:12.408: INFO: Pod pod-secrets-8db23af8-3e54-41a7-aba1-350af0e49af9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:54:12.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4348" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":2778,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:54:12.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8762 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8762 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8762 May 11 19:54:12.576: INFO: Found 0 stateful pods, waiting for 1 May 11 19:54:22.581: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 11 19:54:22.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8762 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 19:54:31.115: INFO: stderr: "I0511 19:54:30.947055 2843 log.go:172] (0xc000bcaa50) (0xc00085a780) Create stream\nI0511 19:54:30.947107 2843 log.go:172] (0xc000bcaa50) (0xc00085a780) Stream added, broadcasting: 1\nI0511 19:54:30.949861 2843 log.go:172] (0xc000bcaa50) Reply frame received for 1\nI0511 19:54:30.949898 2843 log.go:172] (0xc000bcaa50) (0xc000306000) Create stream\nI0511 19:54:30.949908 2843 log.go:172] (0xc000bcaa50) (0xc000306000) Stream added, broadcasting: 3\nI0511 19:54:30.950952 2843 log.go:172] (0xc000bcaa50) Reply frame received for 3\nI0511 19:54:30.950992 2843 
log.go:172] (0xc000bcaa50) (0xc000306e60) Create stream\nI0511 19:54:30.951013 2843 log.go:172] (0xc000bcaa50) (0xc000306e60) Stream added, broadcasting: 5\nI0511 19:54:30.951977 2843 log.go:172] (0xc000bcaa50) Reply frame received for 5\nI0511 19:54:31.028937 2843 log.go:172] (0xc000bcaa50) Data frame received for 5\nI0511 19:54:31.028955 2843 log.go:172] (0xc000306e60) (5) Data frame handling\nI0511 19:54:31.028964 2843 log.go:172] (0xc000306e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 19:54:31.109744 2843 log.go:172] (0xc000bcaa50) Data frame received for 5\nI0511 19:54:31.109788 2843 log.go:172] (0xc000306e60) (5) Data frame handling\nI0511 19:54:31.109810 2843 log.go:172] (0xc000bcaa50) Data frame received for 3\nI0511 19:54:31.109828 2843 log.go:172] (0xc000306000) (3) Data frame handling\nI0511 19:54:31.109846 2843 log.go:172] (0xc000306000) (3) Data frame sent\nI0511 19:54:31.109857 2843 log.go:172] (0xc000bcaa50) Data frame received for 3\nI0511 19:54:31.109880 2843 log.go:172] (0xc000306000) (3) Data frame handling\nI0511 19:54:31.110956 2843 log.go:172] (0xc000bcaa50) Data frame received for 1\nI0511 19:54:31.110968 2843 log.go:172] (0xc00085a780) (1) Data frame handling\nI0511 19:54:31.110980 2843 log.go:172] (0xc00085a780) (1) Data frame sent\nI0511 19:54:31.110989 2843 log.go:172] (0xc000bcaa50) (0xc00085a780) Stream removed, broadcasting: 1\nI0511 19:54:31.111002 2843 log.go:172] (0xc000bcaa50) Go away received\nI0511 19:54:31.111289 2843 log.go:172] (0xc000bcaa50) (0xc00085a780) Stream removed, broadcasting: 1\nI0511 19:54:31.111311 2843 log.go:172] (0xc000bcaa50) (0xc000306000) Stream removed, broadcasting: 3\nI0511 19:54:31.111321 2843 log.go:172] (0xc000bcaa50) (0xc000306e60) Stream removed, broadcasting: 5\n" May 11 19:54:31.115: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 19:54:31.115: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 19:54:31.135: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 19:54:41.145: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 19:54:41.145: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:54:41.255: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999446s May 11 19:54:42.326: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.898909327s May 11 19:54:43.338: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.827803916s May 11 19:54:44.343: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.815822537s May 11 19:54:45.346: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.810560958s May 11 19:54:46.350: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.808048849s May 11 19:54:47.354: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.803885543s May 11 19:54:48.357: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.800168617s May 11 19:54:49.361: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.796435574s May 11 19:54:50.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 792.817488ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8762 May 11 19:54:51.368: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8762 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:54:51.606: INFO: stderr: "I0511 19:54:51.512178 2879 log.go:172] (0xc000a91760) (0xc000b88320) Create stream\nI0511 19:54:51.512243 2879 log.go:172] (0xc000a91760) (0xc000b88320) Stream added, broadcasting: 1\nI0511 19:54:51.516206 2879 log.go:172] (0xc000a91760) Reply frame received for 1\nI0511 19:54:51.516280 2879 log.go:172] (0xc000a91760) (0xc000362320) Create stream\nI0511 19:54:51.516304 2879 log.go:172] (0xc000a91760) (0xc000362320) Stream added, broadcasting: 3\nI0511 19:54:51.517432 2879 log.go:172] (0xc000a91760) Reply frame received for 3\nI0511 19:54:51.517471 2879 log.go:172] (0xc000a91760) (0xc0005021e0) Create stream\nI0511 19:54:51.517483 2879 log.go:172] (0xc000a91760) (0xc0005021e0) Stream added, broadcasting: 5\nI0511 19:54:51.518373 2879 log.go:172] (0xc000a91760) Reply frame received for 5\nI0511 19:54:51.599324 2879 log.go:172] (0xc000a91760) Data frame received for 3\nI0511 19:54:51.599381 2879 log.go:172] (0xc000362320) (3) Data frame handling\nI0511 19:54:51.599398 2879 log.go:172] (0xc000362320) (3) Data frame sent\nI0511 19:54:51.599408 2879 log.go:172] (0xc000a91760) Data frame received for 3\nI0511 19:54:51.599415 2879 log.go:172] (0xc000362320) (3) Data frame handling\nI0511 19:54:51.599466 2879 log.go:172] (0xc000a91760) Data frame received for 5\nI0511 19:54:51.599491 2879 log.go:172] (0xc0005021e0) (5) Data frame handling\nI0511 19:54:51.599518 2879 log.go:172] (0xc0005021e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 19:54:51.599536 2879 log.go:172] (0xc000a91760) Data frame received for 5\nI0511 19:54:51.599555 2879 log.go:172] (0xc0005021e0) (5) Data frame handling\nI0511 19:54:51.600762 2879 log.go:172] (0xc000a91760) Data frame received for 1\nI0511 19:54:51.600796 2879 log.go:172] (0xc000b88320) (1) Data frame handling\nI0511 19:54:51.600815 2879 log.go:172] (0xc000b88320) (1) Data frame sent\nI0511 19:54:51.600833 2879 log.go:172] (0xc000a91760) (0xc000b88320) Stream removed, broadcasting: 1\nI0511 19:54:51.600864 2879 log.go:172] (0xc000a91760) Go away received\nI0511 19:54:51.601366 2879 log.go:172] (0xc000a91760) (0xc000b88320) Stream removed, broadcasting: 1\nI0511 19:54:51.601482 2879 log.go:172] (0xc000a91760) (0xc000362320) Stream removed, broadcasting: 3\nI0511 19:54:51.601494 2879 log.go:172] (0xc000a91760) (0xc0005021e0) Stream removed, broadcasting: 5\n" May 11 19:54:51.607: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 19:54:51.607: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 19:54:51.611: INFO: Found 1 stateful pods, waiting for 3 May 11 19:55:01.615: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 19:55:01.615: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 19:55:01.615: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 11 19:55:01.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8762 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 19:55:01.828: INFO: stderr: "I0511 19:55:01.752869 2899 log.go:172] (0xc000aa5550) (0xc0006d0fa0) Create stream\nI0511 19:55:01.753296 2899 log.go:172] (0xc000aa5550) (0xc0006d0fa0) Stream added, broadcasting: 1\nI0511 19:55:01.756357 2899 log.go:172] (0xc000aa5550) Reply frame received for 1\nI0511 19:55:01.756392 2899 log.go:172] (0xc000aa5550) (0xc00046e500) Create stream\nI0511 19:55:01.756404 2899 log.go:172] (0xc000aa5550) (0xc00046e500) Stream added, broadcasting: 3\nI0511 19:55:01.757101 2899 log.go:172] (0xc000aa5550) Reply frame received for 3\nI0511 19:55:01.757398 2899 log.go:172] (0xc000aa5550) (0xc0006bd5e0) Create stream\nI0511 19:55:01.757412 2899 log.go:172] (0xc000aa5550) (0xc0006bd5e0) Stream added, broadcasting: 5\nI0511 19:55:01.758036 2899 log.go:172] (0xc000aa5550) Reply frame received for 5\nI0511 19:55:01.823878 2899 log.go:172] (0xc000aa5550) Data frame received for 5\nI0511 19:55:01.823902 2899 log.go:172] (0xc0006bd5e0) (5) Data frame handling\nI0511 19:55:01.823920 2899 log.go:172] (0xc0006bd5e0) (5) Data frame sent\nI0511 19:55:01.823930 2899 log.go:172] (0xc000aa5550) Data frame received for 5\nI0511 19:55:01.823938 2899 log.go:172] (0xc0006bd5e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 19:55:01.824257 2899 log.go:172] (0xc000aa5550) Data frame received for 3\nI0511 19:55:01.824271 2899 log.go:172] (0xc00046e500) (3) Data frame handling\nI0511 19:55:01.824287 2899 log.go:172] (0xc00046e500) (3) Data frame sent\nI0511 19:55:01.824296 2899 log.go:172] (0xc000aa5550) Data frame received for 3\nI0511 19:55:01.824308 2899 log.go:172] (0xc00046e500) (3) Data frame handling\nI0511 19:55:01.825618 2899 log.go:172] (0xc000aa5550) Data frame received for 1\nI0511 19:55:01.825634 2899 log.go:172] (0xc0006d0fa0) (1) Data frame handling\nI0511 19:55:01.825642 2899 log.go:172] (0xc0006d0fa0) (1) Data frame sent\nI0511 19:55:01.825656 2899 log.go:172] (0xc000aa5550) (0xc0006d0fa0) Stream removed, broadcasting: 1\nI0511 19:55:01.825675 2899 log.go:172] (0xc000aa5550) Go away received\nI0511 19:55:01.825852 2899 log.go:172] (0xc000aa5550) (0xc0006d0fa0) Stream removed, broadcasting: 1\nI0511 19:55:01.825868 2899 log.go:172] (0xc000aa5550) (0xc00046e500) Stream removed, broadcasting: 3\nI0511 19:55:01.825877 2899 log.go:172] (0xc000aa5550) (0xc0006bd5e0) Stream removed, broadcasting: 5\n" May 11 19:55:01.829: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 19:55:01.829: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 19:55:01.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8762 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 19:55:02.723: INFO: stderr: "I0511 19:55:02.202226 2919 log.go:172] (0xc0009fd1e0) (0xc00072cfa0) Create stream\nI0511 19:55:02.202291 2919 log.go:172] (0xc0009fd1e0) (0xc00072cfa0) Stream added, broadcasting: 1\nI0511 19:55:02.204575 2919 log.go:172] (0xc0009fd1e0) Reply frame received for 1\nI0511 19:55:02.204626 2919 log.go:172] (0xc0009fd1e0) (0xc000732f00) Create stream\nI0511 19:55:02.204642 2919 log.go:172] (0xc0009fd1e0) (0xc000732f00) Stream added, broadcasting: 3\nI0511 19:55:02.205485 2919 log.go:172] (0xc0009fd1e0) Reply frame received for 3\nI0511 19:55:02.205501 2919 log.go:172] 
(0xc0009fd1e0) (0xc00072d540) Create stream\nI0511 19:55:02.205507 2919 log.go:172] (0xc0009fd1e0) (0xc00072d540) Stream added, broadcasting: 5\nI0511 19:55:02.206211 2919 log.go:172] (0xc0009fd1e0) Reply frame received for 5\nI0511 19:55:02.274653 2919 log.go:172] (0xc0009fd1e0) Data frame received for 5\nI0511 19:55:02.274673 2919 log.go:172] (0xc00072d540) (5) Data frame handling\nI0511 19:55:02.274684 2919 log.go:172] (0xc00072d540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 19:55:02.713837 2919 log.go:172] (0xc0009fd1e0) Data frame received for 3\nI0511 19:55:02.713886 2919 log.go:172] (0xc000732f00) (3) Data frame handling\nI0511 19:55:02.713920 2919 log.go:172] (0xc000732f00) (3) Data frame sent\nI0511 19:55:02.714018 2919 log.go:172] (0xc0009fd1e0) Data frame received for 5\nI0511 19:55:02.714035 2919 log.go:172] (0xc00072d540) (5) Data frame handling\nI0511 19:55:02.714299 2919 log.go:172] (0xc0009fd1e0) Data frame received for 3\nI0511 19:55:02.714319 2919 log.go:172] (0xc000732f00) (3) Data frame handling\nI0511 19:55:02.716349 2919 log.go:172] (0xc0009fd1e0) Data frame received for 1\nI0511 19:55:02.716376 2919 log.go:172] (0xc00072cfa0) (1) Data frame handling\nI0511 19:55:02.716390 2919 log.go:172] (0xc00072cfa0) (1) Data frame sent\nI0511 19:55:02.716408 2919 log.go:172] (0xc0009fd1e0) (0xc00072cfa0) Stream removed, broadcasting: 1\nI0511 19:55:02.716571 2919 log.go:172] (0xc0009fd1e0) Go away received\nI0511 19:55:02.716881 2919 log.go:172] (0xc0009fd1e0) (0xc00072cfa0) Stream removed, broadcasting: 1\nI0511 19:55:02.716926 2919 log.go:172] (0xc0009fd1e0) (0xc000732f00) Stream removed, broadcasting: 3\nI0511 19:55:02.716943 2919 log.go:172] (0xc0009fd1e0) (0xc00072d540) Stream removed, broadcasting: 5\n" May 11 19:55:02.723: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 19:55:02.723: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 19:55:02.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8762 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 19:55:03.266: INFO: stderr: "I0511 19:55:02.991504 2939 log.go:172] (0xc0009151e0) (0xc0006d3f40) Create stream\nI0511 19:55:02.991568 2939 log.go:172] (0xc0009151e0) (0xc0006d3f40) Stream added, broadcasting: 1\nI0511 19:55:02.996708 2939 log.go:172] (0xc0009151e0) Reply frame received for 1\nI0511 19:55:02.996760 2939 log.go:172] (0xc0009151e0) (0xc00068cd20) Create stream\nI0511 19:55:02.996777 2939 log.go:172] (0xc0009151e0) (0xc00068cd20) Stream added, broadcasting: 3\nI0511 19:55:02.998132 2939 log.go:172] (0xc0009151e0) Reply frame received for 3\nI0511 19:55:02.998184 2939 log.go:172] (0xc0009151e0) (0xc0006825a0) Create stream\nI0511 19:55:02.998207 2939 log.go:172] (0xc0009151e0) (0xc0006825a0) Stream added, broadcasting: 5\nI0511 19:55:02.999176 2939 log.go:172] (0xc0009151e0) Reply frame received for 5\nI0511 19:55:03.063463 2939 log.go:172] (0xc0009151e0) Data frame received for 5\nI0511 19:55:03.063488 2939 log.go:172] (0xc0006825a0) (5) Data frame handling\nI0511 19:55:03.063509 2939 log.go:172] (0xc0006825a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 19:55:03.259629 2939 log.go:172] (0xc0009151e0) Data frame received for 3\nI0511 19:55:03.259746 2939 log.go:172] (0xc00068cd20) (3) Data 
frame handling\nI0511 19:55:03.259812 2939 log.go:172] (0xc00068cd20) (3) Data frame sent\nI0511 19:55:03.259886 2939 log.go:172] (0xc0009151e0) Data frame received for 5\nI0511 19:55:03.259986 2939 log.go:172] (0xc0006825a0) (5) Data frame handling\nI0511 19:55:03.260013 2939 log.go:172] (0xc0009151e0) Data frame received for 3\nI0511 19:55:03.260028 2939 log.go:172] (0xc00068cd20) (3) Data frame handling\nI0511 19:55:03.261905 2939 log.go:172] (0xc0009151e0) Data frame received for 1\nI0511 19:55:03.261926 2939 log.go:172] (0xc0006d3f40) (1) Data frame handling\nI0511 19:55:03.261945 2939 log.go:172] (0xc0006d3f40) (1) Data frame sent\nI0511 19:55:03.261958 2939 log.go:172] (0xc0009151e0) (0xc0006d3f40) Stream removed, broadcasting: 1\nI0511 19:55:03.261976 2939 log.go:172] (0xc0009151e0) Go away received\nI0511 19:55:03.262404 2939 log.go:172] (0xc0009151e0) (0xc0006d3f40) Stream removed, broadcasting: 1\nI0511 19:55:03.262424 2939 log.go:172] (0xc0009151e0) (0xc00068cd20) Stream removed, broadcasting: 3\nI0511 19:55:03.262433 2939 log.go:172] (0xc0009151e0) (0xc0006825a0) Stream removed, broadcasting: 5\n" May 11 19:55:03.267: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 19:55:03.267: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 19:55:03.267: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:55:03.433: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 11 19:55:13.441: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 19:55:13.441: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 19:55:13.441: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 19:55:13.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999599s May 11 19:55:15.111: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.940459705s May 11 19:55:16.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.335640195s May 11 19:55:17.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.329519497s May 11 19:55:18.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.325243527s May 11 19:55:19.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.319871458s May 11 19:55:20.136: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.315574503s May 11 19:55:21.364: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.31110874s May 11 19:55:22.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.083073976s May 11 19:55:23.386: INFO: Verifying statefulset ss doesn't scale past 3 for another 78.865684ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8762 May 11 19:55:24.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8762 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:55:24.922: INFO: stderr: "I0511 19:55:24.734353 2957 log.go:172] (0xc00003af20) (0xc0001395e0) Create stream\nI0511 19:55:24.734427 2957 log.go:172] (0xc00003af20) (0xc0001395e0) Stream added, broadcasting: 1\nI0511 19:55:24.736561 2957 log.go:172] (0xc00003af20) Reply frame received for 1\nI0511
19:55:24.736592 2957 log.go:172] (0xc00003af20) (0xc0000dd0e0) Create stream\nI0511 19:55:24.736601 2957 log.go:172] (0xc00003af20) (0xc0000dd0e0) Stream added, broadcasting: 3\nI0511 19:55:24.737659 2957 log.go:172] (0xc00003af20) Reply frame received for 3\nI0511 19:55:24.737686 2957 log.go:172] (0xc00003af20) (0xc0006ba5a0) Create stream\nI0511 19:55:24.737699 2957 log.go:172] (0xc00003af20) (0xc0006ba5a0) Stream added, broadcasting: 5\nI0511 19:55:24.738622 2957 log.go:172] (0xc00003af20) Reply frame received for 5\nI0511 19:55:24.916712 2957 log.go:172] (0xc00003af20) Data frame received for 5\nI0511 19:55:24.916733 2957 log.go:172] (0xc0006ba5a0) (5) Data frame handling\nI0511 19:55:24.916741 2957 log.go:172] (0xc0006ba5a0) (5) Data frame sent\nI0511 19:55:24.916746 2957 log.go:172] (0xc00003af20) Data frame received for 5\nI0511 19:55:24.916750 2957 log.go:172] (0xc0006ba5a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 19:55:24.916778 2957 log.go:172] (0xc00003af20) Data frame received for 3\nI0511 19:55:24.916797 2957 log.go:172] (0xc0000dd0e0) (3) Data frame handling\nI0511 19:55:24.916810 2957 log.go:172] (0xc0000dd0e0) (3) Data frame sent\nI0511 19:55:24.916817 2957 log.go:172] (0xc00003af20) Data frame received for 3\nI0511 19:55:24.916823 2957 log.go:172] (0xc0000dd0e0) (3) Data frame handling\nI0511 19:55:24.917903 2957 log.go:172] (0xc00003af20) Data frame received for 1\nI0511 19:55:24.917918 2957 log.go:172] (0xc0001395e0) (1) Data frame handling\nI0511 19:55:24.917925 2957 log.go:172] (0xc0001395e0) (1) Data frame sent\nI0511 19:55:24.917933 2957 log.go:172] (0xc00003af20) (0xc0001395e0) Stream removed, broadcasting: 1\nI0511 19:55:24.917988 2957 log.go:172] (0xc00003af20) Go away received\nI0511 19:55:24.918146 2957 log.go:172] (0xc00003af20) (0xc0001395e0) Stream removed, broadcasting: 1\nI0511 19:55:24.918157 2957 log.go:172] (0xc00003af20) (0xc0000dd0e0) Stream removed, broadcasting: 3\nI0511 19:55:24.918164 2957 log.go:172] (0xc00003af20) (0xc0006ba5a0) Stream removed, broadcasting: 5\n" May 11 19:55:24.922: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 19:55:24.922: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 19:55:24.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8762 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:55:25.180: INFO: stderr: "I0511 19:55:25.115403 2977 log.go:172] (0xc000b33130) (0xc000a9c3c0) Create stream\nI0511 19:55:25.115478 2977 log.go:172] (0xc000b33130) (0xc000a9c3c0) Stream added, broadcasting: 1\nI0511 19:55:25.125431 2977 log.go:172] (0xc000b33130) Reply frame received for 1\nI0511 19:55:25.125555 2977 log.go:172] (0xc000b33130) (0xc00051cc80) Create stream\nI0511 19:55:25.125601 2977 log.go:172] (0xc000b33130) (0xc00051cc80) Stream added, broadcasting: 3\nI0511 19:55:25.127662 2977 log.go:172] (0xc000b33130) Reply frame received for 3\nI0511 19:55:25.127695 2977 log.go:172] (0xc000b33130) (0xc0004fc3c0) Create stream\nI0511 19:55:25.127703 2977 log.go:172] (0xc000b33130) (0xc0004fc3c0) Stream added, broadcasting: 5\nI0511 19:55:25.128297 2977 log.go:172] (0xc000b33130) Reply frame received for 5\nI0511 19:55:25.175477 2977 log.go:172] (0xc000b33130) Data frame received for 5\nI0511 19:55:25.175497 2977 log.go:172] 
(0xc0004fc3c0) (5) Data frame handling\nI0511 19:55:25.175505 2977 log.go:172] (0xc0004fc3c0) (5) Data frame sent\nI0511 19:55:25.175511 2977 log.go:172] (0xc000b33130) Data frame received for 5\nI0511 19:55:25.175515 2977 log.go:172] (0xc0004fc3c0) (5) Data frame handling\nI0511 19:55:25.175527 2977 log.go:172] (0xc000b33130) Data frame received for 3\nI0511 19:55:25.175534 2977 log.go:172] (0xc00051cc80) (3) Data frame handling\nI0511 19:55:25.175540 2977 log.go:172] (0xc00051cc80) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 19:55:25.175615 2977 log.go:172] (0xc000b33130) Data frame received for 3\nI0511 19:55:25.175636 2977 log.go:172] (0xc00051cc80) (3) Data frame handling\nI0511 19:55:25.176900 2977 log.go:172] (0xc000b33130) Data frame received for 1\nI0511 19:55:25.176911 2977 log.go:172] (0xc000a9c3c0) (1) Data frame handling\nI0511 19:55:25.176922 2977 log.go:172] (0xc000a9c3c0) (1) Data frame sent\nI0511 19:55:25.176930 2977 log.go:172] (0xc000b33130) (0xc000a9c3c0) Stream removed, broadcasting: 1\nI0511 19:55:25.176975 2977 log.go:172] (0xc000b33130) Go away received\nI0511 19:55:25.177260 2977 log.go:172] (0xc000b33130) (0xc000a9c3c0) Stream removed, broadcasting: 1\nI0511 19:55:25.177272 2977 log.go:172] (0xc000b33130) (0xc00051cc80) Stream removed, broadcasting: 3\nI0511 19:55:25.177278 2977 log.go:172] (0xc000b33130) (0xc0004fc3c0) Stream removed, broadcasting: 5\n" May 11 19:55:25.180: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 19:55:25.180: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 19:55:25.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8762 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 19:55:25.401: INFO: stderr: "I0511 19:55:25.339485 2997 log.go:172] (0xc00061e370) (0xc0004a8e60) Create stream\nI0511 19:55:25.339529 2997 log.go:172] (0xc00061e370) (0xc0004a8e60) Stream added, broadcasting: 1\nI0511 19:55:25.341069 2997 log.go:172] (0xc00061e370) Reply frame received for 1\nI0511 19:55:25.341286 2997 log.go:172] (0xc00061e370) (0xc0000dcdc0) Create stream\nI0511 19:55:25.341314 2997 log.go:172] (0xc00061e370) (0xc0000dcdc0) Stream added, broadcasting: 3\nI0511 19:55:25.342243 2997 log.go:172] (0xc00061e370) Reply frame received for 3\nI0511 19:55:25.342274 2997 log.go:172] (0xc00061e370) (0xc000304140) Create stream\nI0511 19:55:25.342284 2997 log.go:172] (0xc00061e370) (0xc000304140) Stream added, broadcasting: 5\nI0511 19:55:25.343360 2997 log.go:172] (0xc00061e370) Reply frame received for 5\nI0511 19:55:25.395331 2997 log.go:172] (0xc00061e370) Data frame received for 3\nI0511 19:55:25.395353 2997 log.go:172] (0xc0000dcdc0) (3) Data frame handling\nI0511 19:55:25.395366 2997 log.go:172] (0xc0000dcdc0) (3) Data frame sent\nI0511 19:55:25.395373 2997 log.go:172] (0xc00061e370) Data frame received for 3\nI0511 19:55:25.395377 2997 log.go:172] (0xc0000dcdc0) (3) Data frame handling\nI0511 19:55:25.395440 2997 log.go:172] (0xc00061e370) Data frame received for 5\nI0511 19:55:25.395450 2997 log.go:172] (0xc000304140) (5) Data frame handling\nI0511 19:55:25.395458 2997 log.go:172] (0xc000304140) (5) Data frame sent\nI0511 19:55:25.395468 2997 log.go:172] (0xc00061e370) Data frame received for 5\nI0511 19:55:25.395474 2997 log.go:172] (0xc000304140) (5) Data 
frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 19:55:25.396830 2997 log.go:172] (0xc00061e370) Data frame received for 1\nI0511 19:55:25.396848 2997 log.go:172] (0xc0004a8e60) (1) Data frame handling\nI0511 19:55:25.396858 2997 log.go:172] (0xc0004a8e60) (1) Data frame sent\nI0511 19:55:25.396869 2997 log.go:172] (0xc00061e370) (0xc0004a8e60) Stream removed, broadcasting: 1\nI0511 19:55:25.397104 2997 log.go:172] (0xc00061e370) Go away received\nI0511 19:55:25.397264 2997 log.go:172] (0xc00061e370) (0xc0004a8e60) Stream removed, broadcasting: 1\nI0511 19:55:25.397292 2997 log.go:172] (0xc00061e370) (0xc0000dcdc0) Stream removed, broadcasting: 3\nI0511 19:55:25.397305 2997 log.go:172] (0xc00061e370) (0xc000304140) Stream removed, broadcasting: 5\n" May 11 19:55:25.401: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 19:55:25.401: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 19:55:25.401: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 19:55:55.479: INFO: Deleting all statefulset in ns statefulset-8762 May 11 19:55:55.481: INFO: Scaling statefulset ss to 0 May 11 19:55:55.489: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:55:55.491: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:55:55.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8762" for this suite. 
• [SLOW TEST:103.085 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":183,"skipped":2778,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:55:55.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:55:55.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c" in namespace "downward-api-1942" to be "Succeeded or Failed" May 11 19:55:55.614: INFO: Pod "downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.576697ms May 11 19:55:57.974: INFO: Pod "downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363522774s May 11 19:55:59.977: INFO: Pod "downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366598976s May 11 19:56:01.980: INFO: Pod "downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.369440659s STEP: Saw pod success May 11 19:56:01.980: INFO: Pod "downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c" satisfied condition "Succeeded or Failed" May 11 19:56:01.982: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c container client-container: STEP: delete the pod May 11 19:56:02.179: INFO: Waiting for pod downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c to disappear May 11 19:56:02.222: INFO: Pod downwardapi-volume-7a1ae08d-d6ed-4215-9db2-b1f695ddb33c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:56:02.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1942" for this suite. 
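The manifest behind this test is not echoed to the log. A minimal sketch of an equivalent pod, assuming the busybox image and a hypothetical pod name, shows how requests.memory is surfaced through a downwardAPI volume (the container name client-container matches the log):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF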
• [SLOW TEST:6.871 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":184,"skipped":2796,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:56:02.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:56:03.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8976' May 11 19:56:03.487: INFO: stderr: "" May 11 19:56:03.487: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 11 19:56:03.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8976' May 11 19:56:03.848: INFO: stderr: "" May 11 19:56:03.848: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 11 19:56:04.851: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:56:04.851: INFO: Found 0 / 1 May 11 19:56:06.059: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:56:06.059: INFO: Found 0 / 1 May 11 19:56:06.920: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:56:06.920: INFO: Found 0 / 1 May 11 19:56:07.852: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:56:07.852: INFO: Found 1 / 1 May 11 19:56:07.852: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 19:56:07.854: INFO: Selector matched 1 pods for map[app:agnhost] May 11 19:56:07.854: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 11 19:56:07.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-svzz6 --namespace=kubectl-8976' May 11 19:56:07.963: INFO: stderr: "" May 11 19:56:07.963: INFO: stdout: "Name: agnhost-master-svzz6\nNamespace: kubectl-8976\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Mon, 11 May 2020 19:56:03 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.44\nIPs:\n IP: 10.244.2.44\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://9e3aad7039e7c34193a38bb3a33dfe45aad3d700720cc9ed0081d8c553462a76\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 May 2020 19:56:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-5tvks (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-5tvks:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-5tvks\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-8976/agnhost-master-svzz6 to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" May 11 19:56:07.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8976' May 11 19:56:08.099: INFO: stderr: "" May 11 19:56:08.099: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8976\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-svzz6\n" May 11 19:56:08.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8976' May 11 19:56:08.215: INFO: stderr: "" May 11 19:56:08.215: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8976\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.100.72.192\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.44:6379\nSession Affinity: None\nEvents: \n" May 11 19:56:08.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe 
node latest-control-plane' May 11 19:56:08.351: INFO: stderr: "" May 11 19:56:08.351: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 11 May 2020 19:56:01 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 May 2020 19:53:26 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 May 2020 19:53:26 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 May 2020 19:53:26 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 May 2020 19:53:26 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 12d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 11 19:56:08.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-8976' May 11 19:56:08.484: INFO: stderr: "" May 11 19:56:08.484: INFO: stdout: "Name: kubectl-8976\nLabels: e2e-framework=kubectl\n e2e-run=0975a546-d021-477c-a431-9f79f69be5de\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:56:08.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8976" for this suite. • [SLOW TEST:6.104 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":185,"skipped":2797,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:56:08.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:56:09.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2450" for this suite. 
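For reference, stripped of the --server and --kubeconfig plumbing, the five describe calls the Kubectl describe test above exercised are simply:

# Each command prints the human-readable summary asserted on above.
kubectl describe pod agnhost-master-svzz6 --namespace=kubectl-8976
kubectl describe rc agnhost-master --namespace=kubectl-8976
kubectl describe service agnhost-master --namespace=kubectl-8976
kubectl describe node latest-control-plane
kubectl describe namespace kubectl-8976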
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":186,"skipped":2808,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:56:09.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 11 19:56:14.783: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2a358b0a-8d24-4d43-8637-d8fd968a452a" May 11 19:56:14.783: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2a358b0a-8d24-4d43-8637-d8fd968a452a" in namespace "pods-2908" to be "terminated due to deadline exceeded" May 11 19:56:14.816: INFO: Pod "pod-update-activedeadlineseconds-2a358b0a-8d24-4d43-8637-d8fd968a452a": Phase="Running", Reason="", readiness=true. Elapsed: 32.742622ms May 11 19:56:16.968: INFO: Pod "pod-update-activedeadlineseconds-2a358b0a-8d24-4d43-8637-d8fd968a452a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.184954748s May 11 19:56:16.968: INFO: Pod "pod-update-activedeadlineseconds-2a358b0a-8d24-4d43-8637-d8fd968a452a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:56:16.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2908" for this suite. 
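activeDeadlineSeconds is one of the few pod-spec fields that may be updated on a live pod, which is what the test above relies on. A minimal sketch of the update step, with a hypothetical pod name; once the deadline elapses the kubelet kills the pod and it ends up Phase=Failed with Reason=DeadlineExceeded, exactly as logged:

# Strategic-merge patch against the running pod; the deadline is measured
# from pod start, so a small value fails the pod almost immediately.
kubectl --namespace=pods-2908 patch pod pod-update-example -p '{"spec":{"activeDeadlineSeconds":5}}'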
• [SLOW TEST:7.631 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":187,"skipped":2836,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:56:16.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 19:56:18.882: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 19:56:21.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823778, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823778, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823778, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823778, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 19:56:23.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823778, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823778, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823778, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724823778, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the 
webhook service STEP: Verifying the service has paired with the endpoint May 11 19:56:26.496: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:56:26.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3214-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:56:27.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9155" for this suite. STEP: Destroying namespace "webhook-9155-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.816 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":188,"skipped":2836,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:56:27.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 11 19:56:30.303: INFO: Pod name wrapped-volume-race-8abbd2a3-d107-47de-9992-05ae812e7f1b: Found 0 pods out of 5 May 11 19:56:35.311: INFO: Pod name wrapped-volume-race-8abbd2a3-d107-47de-9992-05ae812e7f1b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8abbd2a3-d107-47de-9992-05ae812e7f1b in namespace emptydir-wrapper-565, will wait for the garbage collector to delete the pods May 11 19:56:50.490: INFO: Deleting ReplicationController wrapped-volume-race-8abbd2a3-d107-47de-9992-05ae812e7f1b took: 230.244136ms May 11 19:56:50.790: INFO: Terminating ReplicationController wrapped-volume-race-8abbd2a3-d107-47de-9992-05ae812e7f1b pods took: 300.284886ms STEP: Creating RC which spawns configmap-volume pods May 11 19:57:05.754: INFO: Pod name wrapped-volume-race-ed448770-bb35-4fc7-87f1-b1bd679cf70e: Found 0 pods out of 5 May 11 19:57:10.762: INFO: Pod name 
wrapped-volume-race-ed448770-bb35-4fc7-87f1-b1bd679cf70e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ed448770-bb35-4fc7-87f1-b1bd679cf70e in namespace emptydir-wrapper-565, will wait for the garbage collector to delete the pods May 11 19:57:28.970: INFO: Deleting ReplicationController wrapped-volume-race-ed448770-bb35-4fc7-87f1-b1bd679cf70e took: 5.973746ms May 11 19:57:29.370: INFO: Terminating ReplicationController wrapped-volume-race-ed448770-bb35-4fc7-87f1-b1bd679cf70e pods took: 400.210885ms STEP: Creating RC which spawns configmap-volume pods May 11 19:57:45.806: INFO: Pod name wrapped-volume-race-6d733d2f-c531-440a-a137-5d8a7f8aa00f: Found 0 pods out of 5 May 11 19:57:50.811: INFO: Pod name wrapped-volume-race-6d733d2f-c531-440a-a137-5d8a7f8aa00f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6d733d2f-c531-440a-a137-5d8a7f8aa00f in namespace emptydir-wrapper-565, will wait for the garbage collector to delete the pods May 11 19:58:04.934: INFO: Deleting ReplicationController wrapped-volume-race-6d733d2f-c531-440a-a137-5d8a7f8aa00f took: 6.895781ms May 11 19:58:05.235: INFO: Terminating ReplicationController wrapped-volume-race-6d733d2f-c531-440a-a137-5d8a7f8aa00f pods took: 300.24593ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:58:16.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-565" for this suite. • [SLOW TEST:109.230 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":189,"skipped":2853,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:58:17.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:58:29.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-480" for this suite. • [SLOW TEST:12.051 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":190,"skipped":2861,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:58:29.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 11 19:58:29.433: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 11 19:58:40.123: INFO: >>> kubeConfig: /root/.kube/config May 11 19:58:43.063: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:58:53.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8213" for this suite. 
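The OpenAPI assertions in this test reduce to fetching the apiserver's aggregated schema and checking that every served CRD version is published in it. A minimal sketch of that probe, assuming jq is available (the real definition names are generated fresh each run):

# Dump the published OpenAPI v2 document and list definitions for the test CRDs.
kubectl get --raw /openapi/v2 | jq -r '.definitions | keys[]' | grep e2e-test-crd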
• [SLOW TEST:24.802 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":191,"skipped":2861,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:58:53.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 11 19:58:54.060: INFO: Waiting up to 5m0s for pod "pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11" in namespace "emptydir-2079" to be "Succeeded or Failed" May 11 19:58:54.192: INFO: Pod "pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11": Phase="Pending", Reason="", readiness=false. Elapsed: 132.458348ms May 11 19:58:56.215: INFO: Pod "pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155382928s May 11 19:58:58.263: INFO: Pod "pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11": Phase="Running", Reason="", readiness=true. Elapsed: 4.203114787s May 11 19:59:00.266: INFO: Pod "pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206627373s STEP: Saw pod success May 11 19:59:00.266: INFO: Pod "pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11" satisfied condition "Succeeded or Failed" May 11 19:59:00.269: INFO: Trying to get logs from node latest-worker2 pod pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11 container test-container: STEP: delete the pod May 11 19:59:00.317: INFO: Waiting for pod pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11 to disappear May 11 19:59:00.353: INFO: Pod pod-6b27abe3-ed2d-4111-b9a0-b91b8f58ec11 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:00.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2079" for this suite. 
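The emptydir permission tests all follow the same shape: mount an emptyDir, have the container print the mode and ownership of the mount point, and assert on the output. A minimal sketch of such a pod, assuming busybox and hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox              # assumed image
    # Print the mount point's mode; the suite asserts on output like this.
    command: ["sh", "-c", "ls -ld /test-volume && stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF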
• [SLOW TEST:6.482 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":192,"skipped":2872,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:00.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 19:59:00.425: INFO: Waiting up to 5m0s for pod "pod-5044a06b-e8bb-4c56-86e3-6f4144508217" in namespace "emptydir-7162" to be "Succeeded or Failed" May 11 19:59:00.429: INFO: Pod "pod-5044a06b-e8bb-4c56-86e3-6f4144508217": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390078ms May 11 19:59:02.433: INFO: Pod "pod-5044a06b-e8bb-4c56-86e3-6f4144508217": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007951804s May 11 19:59:04.460: INFO: Pod "pod-5044a06b-e8bb-4c56-86e3-6f4144508217": Phase="Running", Reason="", readiness=true. Elapsed: 4.035096793s May 11 19:59:06.478: INFO: Pod "pod-5044a06b-e8bb-4c56-86e3-6f4144508217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053404406s STEP: Saw pod success May 11 19:59:06.478: INFO: Pod "pod-5044a06b-e8bb-4c56-86e3-6f4144508217" satisfied condition "Succeeded or Failed" May 11 19:59:06.484: INFO: Trying to get logs from node latest-worker2 pod pod-5044a06b-e8bb-4c56-86e3-6f4144508217 container test-container: STEP: delete the pod May 11 19:59:06.677: INFO: Waiting for pod pod-5044a06b-e8bb-4c56-86e3-6f4144508217 to disappear May 11 19:59:06.734: INFO: Pod pod-5044a06b-e8bb-4c56-86e3-6f4144508217 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:06.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7162" for this suite. 
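The "Trying to get logs" step is how the suite reads the container's output back once the pod reaches Succeeded; done by hand it is just the following, with the pod name copied from the log above:

# Read what the test container printed before it exited.
kubectl --namespace=emptydir-7162 logs pod-5044a06b-e8bb-4c56-86e3-6f4144508217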
• [SLOW TEST:6.380 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":2902,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:06.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:14.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3002" for this suite. • [SLOW TEST:7.397 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":194,"skipped":2916,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:14.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0511 19:59:15.025693 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 19:59:15.025: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:15.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7092" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":195,"skipped":2962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:15.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 11 19:59:15.109: INFO: Waiting up to 5m0s for pod "downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7" in namespace "downward-api-2967" to be "Succeeded or Failed" May 11 19:59:15.114: INFO: Pod "downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314364ms May 11 19:59:17.425: INFO: Pod "downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315734879s May 11 19:59:19.556: INFO: Pod "downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446789381s May 11 19:59:21.724: INFO: Pod "downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.614801846s STEP: Saw pod success May 11 19:59:21.724: INFO: Pod "downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7" satisfied condition "Succeeded or Failed" May 11 19:59:21.726: INFO: Trying to get logs from node latest-worker pod downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7 container dapi-container: STEP: delete the pod May 11 19:59:21.801: INFO: Waiting for pod downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7 to disappear May 11 19:59:22.059: INFO: Pod downward-api-1a9c7436-1b27-4fd4-aee4-252d985457f7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:22.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2967" for this suite. 
• [SLOW TEST:7.195 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":2998,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:22.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 11 19:59:22.969: INFO: Waiting up to 5m0s for pod "pod-013da76b-9c73-4b94-8888-84e533599460" in namespace "emptydir-1715" to be "Succeeded or Failed" May 11 19:59:23.019: INFO: Pod "pod-013da76b-9c73-4b94-8888-84e533599460": Phase="Pending", Reason="", readiness=false. Elapsed: 49.57284ms May 11 19:59:25.022: INFO: Pod "pod-013da76b-9c73-4b94-8888-84e533599460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053151855s May 11 19:59:27.035: INFO: Pod "pod-013da76b-9c73-4b94-8888-84e533599460": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066224195s STEP: Saw pod success May 11 19:59:27.035: INFO: Pod "pod-013da76b-9c73-4b94-8888-84e533599460" satisfied condition "Succeeded or Failed" May 11 19:59:27.039: INFO: Trying to get logs from node latest-worker2 pod pod-013da76b-9c73-4b94-8888-84e533599460 container test-container: STEP: delete the pod May 11 19:59:27.089: INFO: Waiting for pod pod-013da76b-9c73-4b94-8888-84e533599460 to disappear May 11 19:59:27.102: INFO: Pod pod-013da76b-9c73-4b94-8888-84e533599460 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:27.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1715" for this suite. 
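The framework's "Waiting up to 5m0s for pod ... to be Succeeded or Failed" lines are a poll over the pod's status.phase; the single probe it repeats is equivalent to the following, with the pod name taken from the log above:

# Returns Pending, Running, Succeeded or Failed.
kubectl --namespace=emptydir-1715 get pod pod-013da76b-9c73-4b94-8888-84e533599460 -o jsonpath='{.status.phase}'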
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3020,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:27.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 11 19:59:27.572: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:43.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2692" for this suite. • [SLOW TEST:16.554 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":198,"skipped":3030,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:43.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:59:43.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe" in namespace "downward-api-4796" to be "Succeeded or Failed" May 11 19:59:43.783: INFO: Pod 
"downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe": Phase="Pending", Reason="", readiness=false. Elapsed: 27.571778ms May 11 19:59:45.852: INFO: Pod "downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096415868s May 11 19:59:47.963: INFO: Pod "downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207935148s May 11 19:59:49.967: INFO: Pod "downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.211128289s STEP: Saw pod success May 11 19:59:49.967: INFO: Pod "downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe" satisfied condition "Succeeded or Failed" May 11 19:59:49.969: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe container client-container: STEP: delete the pod May 11 19:59:50.041: INFO: Waiting for pod downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe to disappear May 11 19:59:50.071: INFO: Pod downwardapi-volume-09d5fa6e-e725-498d-ae5c-b61042f97afe no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:50.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4796" for this suite. • [SLOW TEST:6.416 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":199,"skipped":3035,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:50.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 19:59:50.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c94b981-e511-4b87-87b1-00f37937ec8b" in namespace "downward-api-4211" to be "Succeeded or Failed" May 11 19:59:50.144: INFO: Pod "downwardapi-volume-2c94b981-e511-4b87-87b1-00f37937ec8b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.57047ms May 11 19:59:52.174: INFO: Pod "downwardapi-volume-2c94b981-e511-4b87-87b1-00f37937ec8b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040104477s May 11 19:59:54.180: INFO: Pod "downwardapi-volume-2c94b981-e511-4b87-87b1-00f37937ec8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04624452s STEP: Saw pod success May 11 19:59:54.180: INFO: Pod "downwardapi-volume-2c94b981-e511-4b87-87b1-00f37937ec8b" satisfied condition "Succeeded or Failed" May 11 19:59:54.182: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2c94b981-e511-4b87-87b1-00f37937ec8b container client-container: STEP: delete the pod May 11 19:59:54.408: INFO: Waiting for pod downwardapi-volume-2c94b981-e511-4b87-87b1-00f37937ec8b to disappear May 11 19:59:54.508: INFO: Pod downwardapi-volume-2c94b981-e511-4b87-87b1-00f37937ec8b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 19:59:54.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4211" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3050,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 19:59:54.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 19:59:54.577: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 11 19:59:57.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2742 create -f -' May 11 19:59:58.091: INFO: stderr: "" May 11 19:59:58.091: INFO: stdout: "e2e-test-crd-publish-openapi-6133-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 11 19:59:58.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2742 delete e2e-test-crd-publish-openapi-6133-crds test-foo' May 11 19:59:58.213: INFO: stderr: "" May 11 19:59:58.213: INFO: stdout: "e2e-test-crd-publish-openapi-6133-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 11 19:59:58.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2742 apply -f -' May 11 19:59:58.546: INFO: stderr: "" May 11 19:59:58.546: INFO: stdout: "e2e-test-crd-publish-openapi-6133-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 11 19:59:58.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2742 delete e2e-test-crd-publish-openapi-6133-crds test-foo' May 11 
19:59:58.637: INFO: stderr: "" May 11 19:59:58.637: INFO: stdout: "e2e-test-crd-publish-openapi-6133-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 11 19:59:58.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2742 create -f -' May 11 19:59:58.890: INFO: rc: 1 May 11 19:59:58.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2742 apply -f -' May 11 19:59:59.221: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 11 19:59:59.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2742 create -f -' May 11 19:59:59.492: INFO: rc: 1 May 11 19:59:59.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2742 apply -f -' May 11 19:59:59.782: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 11 19:59:59.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6133-crds' May 11 20:00:00.072: INFO: stderr: "" May 11 20:00:00.072: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6133-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 11 20:00:00.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6133-crds.metadata' May 11 20:00:00.408: INFO: stderr: "" May 11 20:00:00.408: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6133-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 11 20:00:00.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6133-crds.spec' May 11 20:00:00.699: INFO: stderr: "" May 11 20:00:00.699: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6133-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 11 20:00:00.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6133-crds.spec.bars' May 11 20:00:00.991: INFO: stderr: "" May 11 20:00:00.991: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6133-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 11 20:00:00.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6133-crds.spec.bars2' May 11 20:00:01.221: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:00:04.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2742" for this suite. 
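Both the rc: 1 rejections and the explain output above are driven by the structural schema published with the CRD. A sketch of the shape of that schema, using a hypothetical foos.example.com CRD in place of the randomly named e2e one; the required marker on name is what makes the "request without required properties" case fail during client-side validation:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com            # hypothetical; the suite generates a randomized name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        description: Foo CRD for Testing
        properties:
          spec:
            type: object
            description: Specification of Foo
            properties:
              bars:
                type: array
                description: List of Bars and their specs.
                items:
                  type: object
                  required: ["name"]      # a missing name is what turns create/apply into rc: 1
                  properties:
                    name:
                      type: string
                      description: Name of Bar.
                    age:
                      type: string
                      description: Age of Bar.
                    bazs:
                      type: array
                      description: List of Bazs.
                      items:
                        type: string
          status:
            type: object
            description: Status of Foo

The description strings land verbatim in the published OpenAPI document, which is why kubectl explain can echo them back field by field.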
• [SLOW TEST:9.676 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":201,"skipped":3072,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:00:04.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:01:04.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3325" for this suite. 
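The probe case above needs no assertion beyond elapsed time: a readiness probe that always fails must leave the pod Running but never Ready, and must never restart the container, since only liveness probes trigger restarts. A minimal sketch of such a pod, assuming a busybox image and an exec probe:

apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]        # keep the container alive
    readinessProbe:
      exec:
        command: ["/bin/false"]       # always exits 1, so Ready never becomes True
      initialDelaySeconds: 5
      periodSeconds: 5

Ready=false with restartCount 0 is the expected steady state, which is exactly what the minute-long wait above is checking for.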
• [SLOW TEST:60.336 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:01:04.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 11 20:01:04.691: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 11 20:01:04.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-428' May 11 20:01:05.134: INFO: stderr: "" May 11 20:01:05.134: INFO: stdout: "service/agnhost-slave created\n" May 11 20:01:05.136: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 11 20:01:05.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-428' May 11 20:01:05.551: INFO: stderr: "" May 11 20:01:05.551: INFO: stdout: "service/agnhost-master created\n" May 11 20:01:05.551: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 11 20:01:05.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-428' May 11 20:01:05.854: INFO: stderr: "" May 11 20:01:05.854: INFO: stdout: "service/frontend created\n" May 11 20:01:05.854: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 11 20:01:05.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-428' May 11 20:01:06.128: INFO: stderr: "" May 11 20:01:06.128: INFO: stdout: "deployment.apps/frontend created\n" May 11 20:01:06.129: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 11 20:01:06.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-428' May 11 20:01:06.476: INFO: stderr: "" May 11 20:01:06.476: INFO: stdout: "deployment.apps/agnhost-master created\n" May 11 20:01:06.476: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 11 20:01:06.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-428' May 11 20:01:06.883: INFO: stderr: "" May 11 20:01:06.883: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 11 20:01:06.883: INFO: Waiting for all frontend pods to be Running. May 11 20:01:16.933: INFO: Waiting for frontend to serve content. May 11 20:01:17.136: INFO: Trying to add a new entry to the guestbook. May 11 20:01:17.185: INFO: Verifying that added entry can be retrieved. May 11 20:01:17.194: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources May 11 20:01:22.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-428' May 11 20:01:22.514: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 20:01:22.514: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 11 20:01:22.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-428' May 11 20:01:22.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 20:01:22.727: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 11 20:01:22.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-428' May 11 20:01:22.871: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 20:01:22.871: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 20:01:22.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-428' May 11 20:01:22.988: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 20:01:22.988: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 20:01:22.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-428' May 11 20:01:23.106: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 20:01:23.106: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 11 20:01:23.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-428' May 11 20:01:23.643: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 20:01:23.643: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:01:23.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-428" for this suite. 
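The validation step above ("Trying to add a new entry to the guestbook") talks to the frontend's HTTP API, and the {"data":""} response is just an empty read that the framework retries until the entry shows up. A sketch of a one-shot pod exercising the same endpoint; the curl image and the cmd=set/cmd=get query parameters are assumptions about agnhost's guestbook interface:

apiVersion: v1
kind: Pod
metadata:
  name: guestbook-probe
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: curlimages/curl
    # Write one entry through the frontend Service, then read it back.
    command:
    - sh
    - -c
    - |
      curl -s 'http://frontend/guestbook?cmd=set&key=messages&value=TestEntry'
      curl -s 'http://frontend/guestbook?cmd=get&key=messages'

The cleanup that follows uses --grace-period=0 --force, which skips the graceful-termination wait entirely; that is what the repeated "Immediate deletion does not wait..." warning is about.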
• [SLOW TEST:19.696 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":203,"skipped":3166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:01:24.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2956 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 20:01:25.093: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 11 20:01:26.030: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 20:01:28.088: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 20:01:30.103: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 20:01:32.033: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:01:34.034: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:01:36.034: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:01:38.034: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:01:40.035: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:01:42.033: INFO: The status of Pod netserver-0 is Running (Ready = true) May 11 20:01:42.038: INFO: The status of Pod netserver-1 is Running (Ready = false) May 11 20:01:44.043: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 11 20:01:48.192: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.70:8080/dial?request=hostname&protocol=udp&host=10.244.1.197&port=8081&tries=1'] Namespace:pod-network-test-2956 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:01:48.192: INFO: >>> kubeConfig: /root/.kube/config I0511 20:01:48.433535 7 log.go:172] (0xc00182e420) (0xc001229ae0) Create stream I0511 20:01:48.433554 7 log.go:172] (0xc00182e420) (0xc001229ae0) Stream added, broadcasting: 1 I0511 20:01:48.434777 7 log.go:172] (0xc00182e420) Reply frame received for 1 I0511 20:01:48.434803 7 log.go:172] (0xc00182e420) (0xc0017a4000) Create stream I0511 20:01:48.434813 7 log.go:172] (0xc00182e420) 
(0xc0017a4000) Stream added, broadcasting: 3 I0511 20:01:48.435560 7 log.go:172] (0xc00182e420) Reply frame received for 3 I0511 20:01:48.435580 7 log.go:172] (0xc00182e420) (0xc000d45ea0) Create stream I0511 20:01:48.435585 7 log.go:172] (0xc00182e420) (0xc000d45ea0) Stream added, broadcasting: 5 I0511 20:01:48.436320 7 log.go:172] (0xc00182e420) Reply frame received for 5 I0511 20:01:48.502347 7 log.go:172] (0xc00182e420) Data frame received for 3 I0511 20:01:48.502371 7 log.go:172] (0xc0017a4000) (3) Data frame handling I0511 20:01:48.502386 7 log.go:172] (0xc0017a4000) (3) Data frame sent I0511 20:01:48.502982 7 log.go:172] (0xc00182e420) Data frame received for 3 I0511 20:01:48.503013 7 log.go:172] (0xc0017a4000) (3) Data frame handling I0511 20:01:48.503259 7 log.go:172] (0xc00182e420) Data frame received for 5 I0511 20:01:48.503290 7 log.go:172] (0xc000d45ea0) (5) Data frame handling I0511 20:01:48.504751 7 log.go:172] (0xc00182e420) Data frame received for 1 I0511 20:01:48.504777 7 log.go:172] (0xc001229ae0) (1) Data frame handling I0511 20:01:48.504797 7 log.go:172] (0xc001229ae0) (1) Data frame sent I0511 20:01:48.504827 7 log.go:172] (0xc00182e420) (0xc001229ae0) Stream removed, broadcasting: 1 I0511 20:01:48.504846 7 log.go:172] (0xc00182e420) Go away received I0511 20:01:48.504942 7 log.go:172] (0xc00182e420) (0xc001229ae0) Stream removed, broadcasting: 1 I0511 20:01:48.504958 7 log.go:172] (0xc00182e420) (0xc0017a4000) Stream removed, broadcasting: 3 I0511 20:01:48.504967 7 log.go:172] (0xc00182e420) (0xc000d45ea0) Stream removed, broadcasting: 5 May 11 20:01:48.504: INFO: Waiting for responses: map[] May 11 20:01:48.575: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.70:8080/dial?request=hostname&protocol=udp&host=10.244.2.69&port=8081&tries=1'] Namespace:pod-network-test-2956 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:01:48.575: INFO: >>> kubeConfig: /root/.kube/config I0511 20:01:48.608108 7 log.go:172] (0xc0028104d0) (0xc00148a640) Create stream I0511 20:01:48.608148 7 log.go:172] (0xc0028104d0) (0xc00148a640) Stream added, broadcasting: 1 I0511 20:01:48.609765 7 log.go:172] (0xc0028104d0) Reply frame received for 1 I0511 20:01:48.609813 7 log.go:172] (0xc0028104d0) (0xc002dcf0e0) Create stream I0511 20:01:48.609828 7 log.go:172] (0xc0028104d0) (0xc002dcf0e0) Stream added, broadcasting: 3 I0511 20:01:48.610767 7 log.go:172] (0xc0028104d0) Reply frame received for 3 I0511 20:01:48.610818 7 log.go:172] (0xc0028104d0) (0xc002dcf180) Create stream I0511 20:01:48.610836 7 log.go:172] (0xc0028104d0) (0xc002dcf180) Stream added, broadcasting: 5 I0511 20:01:48.611800 7 log.go:172] (0xc0028104d0) Reply frame received for 5 I0511 20:01:48.673638 7 log.go:172] (0xc0028104d0) Data frame received for 3 I0511 20:01:48.673661 7 log.go:172] (0xc002dcf0e0) (3) Data frame handling I0511 20:01:48.673674 7 log.go:172] (0xc002dcf0e0) (3) Data frame sent I0511 20:01:48.674010 7 log.go:172] (0xc0028104d0) Data frame received for 3 I0511 20:01:48.674028 7 log.go:172] (0xc002dcf0e0) (3) Data frame handling I0511 20:01:48.674042 7 log.go:172] (0xc0028104d0) Data frame received for 5 I0511 20:01:48.674054 7 log.go:172] (0xc002dcf180) (5) Data frame handling I0511 20:01:48.675401 7 log.go:172] (0xc0028104d0) Data frame received for 1 I0511 20:01:48.675415 7 log.go:172] (0xc00148a640) (1) Data frame handling I0511 20:01:48.675429 7 log.go:172] (0xc00148a640) (1) Data frame sent I0511 
20:01:48.675488 7 log.go:172] (0xc0028104d0) (0xc00148a640) Stream removed, broadcasting: 1 I0511 20:01:48.675564 7 log.go:172] (0xc0028104d0) (0xc00148a640) Stream removed, broadcasting: 1 I0511 20:01:48.675580 7 log.go:172] (0xc0028104d0) (0xc002dcf0e0) Stream removed, broadcasting: 3 I0511 20:01:48.675750 7 log.go:172] (0xc0028104d0) (0xc002dcf180) Stream removed, broadcasting: 5 May 11 20:01:48.675: INFO: Waiting for responses: map[] I0511 20:01:48.675866 7 log.go:172] (0xc0028104d0) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:01:48.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2956" for this suite. • [SLOW TEST:24.659 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3208,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:01:48.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:02:05.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4856" for this suite. • [SLOW TEST:16.872 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":205,"skipped":3222,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:02:05.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 11 20:02:05.861: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:02:13.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6295" for this suite. • [SLOW TEST:7.896 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":206,"skipped":3238,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:02:13.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-bdff2b05-deaf-44fd-a7ae-01499b32c8c5 STEP: Creating a pod to test consume secrets May 11 20:02:14.064: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ce6e1c9-7465-4edd-840c-6088cc976a01" in namespace "projected-9905" to be "Succeeded 
or Failed" May 11 20:02:14.080: INFO: Pod "pod-projected-secrets-4ce6e1c9-7465-4edd-840c-6088cc976a01": Phase="Pending", Reason="", readiness=false. Elapsed: 15.903771ms May 11 20:02:16.082: INFO: Pod "pod-projected-secrets-4ce6e1c9-7465-4edd-840c-6088cc976a01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018477722s May 11 20:02:18.086: INFO: Pod "pod-projected-secrets-4ce6e1c9-7465-4edd-840c-6088cc976a01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022317709s STEP: Saw pod success May 11 20:02:18.086: INFO: Pod "pod-projected-secrets-4ce6e1c9-7465-4edd-840c-6088cc976a01" satisfied condition "Succeeded or Failed" May 11 20:02:18.088: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-4ce6e1c9-7465-4edd-840c-6088cc976a01 container projected-secret-volume-test: STEP: delete the pod May 11 20:02:18.194: INFO: Waiting for pod pod-projected-secrets-4ce6e1c9-7465-4edd-840c-6088cc976a01 to disappear May 11 20:02:18.224: INFO: Pod pod-projected-secrets-4ce6e1c9-7465-4edd-840c-6088cc976a01 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:02:18.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9905" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3244,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:02:18.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:02:18.480: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e811adf8-08ed-4e70-8e71-2f76a2d2c920" in namespace "security-context-test-7972" to be "Succeeded or Failed" May 11 20:02:18.499: INFO: Pod "busybox-privileged-false-e811adf8-08ed-4e70-8e71-2f76a2d2c920": Phase="Pending", Reason="", readiness=false. Elapsed: 18.378322ms May 11 20:02:20.503: INFO: Pod "busybox-privileged-false-e811adf8-08ed-4e70-8e71-2f76a2d2c920": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022636365s May 11 20:02:22.761: INFO: Pod "busybox-privileged-false-e811adf8-08ed-4e70-8e71-2f76a2d2c920": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.280471477s May 11 20:02:22.761: INFO: Pod "busybox-privileged-false-e811adf8-08ed-4e70-8e71-2f76a2d2c920" satisfied condition "Succeeded or Failed" May 11 20:02:22.776: INFO: Got logs for pod "busybox-privileged-false-e811adf8-08ed-4e70-8e71-2f76a2d2c920": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:02:22.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7972" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":208,"skipped":3254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:02:22.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 11 20:02:22.892: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8446 /api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-label-changed b6258695-9bf4-4e0f-8470-5d2d30c08c08 3552886 0 2020-05-11 20:02:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 20:02:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:02:22.892: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8446 /api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-label-changed b6258695-9bf4-4e0f-8470-5d2d30c08c08 3552887 0 2020-05-11 20:02:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 20:02:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:02:22.892: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8446 /api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-label-changed b6258695-9bf4-4e0f-8470-5d2d30c08c08 3552888 0 2020-05-11 20:02:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 20:02:22 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 11 20:02:33.009: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8446 /api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-label-changed b6258695-9bf4-4e0f-8470-5d2d30c08c08 3552951 0 2020-05-11 20:02:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 20:02:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:02:33.009: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8446 /api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-label-changed b6258695-9bf4-4e0f-8470-5d2d30c08c08 3552952 0 2020-05-11 20:02:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 20:02:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:02:33.009: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8446 /api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-label-changed b6258695-9bf4-4e0f-8470-5d2d30c08c08 3552953 0 2020-05-11 20:02:22 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-11 20:02:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:02:33.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8446" for this suite. • [SLOW TEST:10.212 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":209,"skipped":3280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:02:33.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:02:44.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4214" for this suite. • [SLOW TEST:11.125 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":210,"skipped":3306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:02:44.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:02:44.256: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-90f5d1f8-e72c-447d-b168-d68d97b3c3c6" in namespace "security-context-test-9975" to be "Succeeded or Failed" May 11 20:02:44.264: INFO: Pod "alpine-nnp-false-90f5d1f8-e72c-447d-b168-d68d97b3c3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.743215ms May 11 20:02:46.268: INFO: Pod "alpine-nnp-false-90f5d1f8-e72c-447d-b168-d68d97b3c3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012167817s May 11 20:02:48.271: INFO: Pod "alpine-nnp-false-90f5d1f8-e72c-447d-b168-d68d97b3c3c6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.014846888s May 11 20:02:50.275: INFO: Pod "alpine-nnp-false-90f5d1f8-e72c-447d-b168-d68d97b3c3c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018990272s May 11 20:02:50.275: INFO: Pod "alpine-nnp-false-90f5d1f8-e72c-447d-b168-d68d97b3c3c6" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:02:50.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9975" for this suite. • [SLOW TEST:6.147 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":211,"skipped":3336,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:02:50.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:02:50.350: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:02:56.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6833" for this suite. 
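The exec-over-websockets test above drives the pod's "exec" subresource directly over a websocket connection. For readers following along, here is a minimal client-go sketch of the same operation using the remotecommand helper (which negotiates SPDY rather than a raw websocket, but targets the same subresource); the kubeconfig path, namespace, pod name, and command are placeholders, not values from this run:

```go
// Minimal sketch: run a command inside a pod via the exec subresource.
// Assumes a reachable cluster and an existing pod named "pod-exec-websockets".
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build a request against the pod's exec subresource.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("pod-exec-websockets").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"/bin/sh", "-c", "echo remote execution"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream runs the command and copies its output back over the connection.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}
```

The command's stdout is streamed back to the caller, which is exactly the round trip the conformance test asserts on.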
• [SLOW TEST:6.545 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":212,"skipped":3341,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:02:56.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7478 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 11 20:02:58.013: INFO: Found 0 stateful pods, waiting for 3 May 11 20:03:08.040: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 20:03:08.040: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 20:03:08.040: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 20:03:18.020: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 20:03:18.020: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 20:03:18.020: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 11 20:03:18.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7478 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 20:03:18.288: INFO: stderr: "I0511 20:03:18.172308 3668 log.go:172] (0xc000b0ba20) (0xc000af2aa0) Create stream\nI0511 20:03:18.172354 3668 log.go:172] (0xc000b0ba20) (0xc000af2aa0) Stream added, broadcasting: 1\nI0511 20:03:18.176412 3668 log.go:172] (0xc000b0ba20) Reply frame received for 1\nI0511 20:03:18.176488 3668 log.go:172] (0xc000b0ba20) (0xc000504140) Create stream\nI0511 20:03:18.176527 3668 log.go:172] (0xc000b0ba20) (0xc000504140) Stream added, broadcasting: 3\nI0511 20:03:18.177774 3668 log.go:172] (0xc000b0ba20) Reply frame received for 3\nI0511 20:03:18.177819 3668 log.go:172] (0xc000b0ba20) (0xc0005050e0) Create stream\nI0511 20:03:18.177830 3668 log.go:172] (0xc000b0ba20) (0xc0005050e0) Stream added, broadcasting: 5\nI0511 20:03:18.178685 3668 log.go:172] (0xc000b0ba20) 
Reply frame received for 5\nI0511 20:03:18.238557 3668 log.go:172] (0xc000b0ba20) Data frame received for 5\nI0511 20:03:18.238580 3668 log.go:172] (0xc0005050e0) (5) Data frame handling\nI0511 20:03:18.238594 3668 log.go:172] (0xc0005050e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 20:03:18.279752 3668 log.go:172] (0xc000b0ba20) Data frame received for 3\nI0511 20:03:18.279923 3668 log.go:172] (0xc000504140) (3) Data frame handling\nI0511 20:03:18.279980 3668 log.go:172] (0xc000504140) (3) Data frame sent\nI0511 20:03:18.280014 3668 log.go:172] (0xc000b0ba20) Data frame received for 3\nI0511 20:03:18.280058 3668 log.go:172] (0xc000504140) (3) Data frame handling\nI0511 20:03:18.280658 3668 log.go:172] (0xc000b0ba20) Data frame received for 5\nI0511 20:03:18.280765 3668 log.go:172] (0xc0005050e0) (5) Data frame handling\nI0511 20:03:18.283105 3668 log.go:172] (0xc000b0ba20) Data frame received for 1\nI0511 20:03:18.283128 3668 log.go:172] (0xc000af2aa0) (1) Data frame handling\nI0511 20:03:18.283145 3668 log.go:172] (0xc000af2aa0) (1) Data frame sent\nI0511 20:03:18.283174 3668 log.go:172] (0xc000b0ba20) (0xc000af2aa0) Stream removed, broadcasting: 1\nI0511 20:03:18.283202 3668 log.go:172] (0xc000b0ba20) Go away received\nI0511 20:03:18.283476 3668 log.go:172] (0xc000b0ba20) (0xc000af2aa0) Stream removed, broadcasting: 1\nI0511 20:03:18.283491 3668 log.go:172] (0xc000b0ba20) (0xc000504140) Stream removed, broadcasting: 3\nI0511 20:03:18.283498 3668 log.go:172] (0xc000b0ba20) (0xc0005050e0) Stream removed, broadcasting: 5\n" May 11 20:03:18.288: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 20:03:18.288: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 11 20:03:28.337: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 11 20:03:38.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7478 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 20:03:38.572: INFO: stderr: "I0511 20:03:38.506373 3690 log.go:172] (0xc0005449a0) (0xc0008f7680) Create stream\nI0511 20:03:38.506415 3690 log.go:172] (0xc0005449a0) (0xc0008f7680) Stream added, broadcasting: 1\nI0511 20:03:38.508852 3690 log.go:172] (0xc0005449a0) Reply frame received for 1\nI0511 20:03:38.508899 3690 log.go:172] (0xc0005449a0) (0xc0008f7b80) Create stream\nI0511 20:03:38.508915 3690 log.go:172] (0xc0005449a0) (0xc0008f7b80) Stream added, broadcasting: 3\nI0511 20:03:38.509902 3690 log.go:172] (0xc0005449a0) Reply frame received for 3\nI0511 20:03:38.509942 3690 log.go:172] (0xc0005449a0) (0xc0008d2b40) Create stream\nI0511 20:03:38.509955 3690 log.go:172] (0xc0005449a0) (0xc0008d2b40) Stream added, broadcasting: 5\nI0511 20:03:38.510866 3690 log.go:172] (0xc0005449a0) Reply frame received for 5\nI0511 20:03:38.563732 3690 log.go:172] (0xc0005449a0) Data frame received for 3\nI0511 20:03:38.563752 3690 log.go:172] (0xc0008f7b80) (3) Data frame handling\nI0511 20:03:38.563766 3690 log.go:172] (0xc0008f7b80) (3) Data frame sent\nI0511 20:03:38.563773 3690 log.go:172] (0xc0005449a0) Data frame received for 3\nI0511 20:03:38.563778 3690 log.go:172] 
(0xc0008f7b80) (3) Data frame handling\nI0511 20:03:38.563956 3690 log.go:172] (0xc0005449a0) Data frame received for 5\nI0511 20:03:38.563992 3690 log.go:172] (0xc0008d2b40) (5) Data frame handling\nI0511 20:03:38.564006 3690 log.go:172] (0xc0008d2b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 20:03:38.564212 3690 log.go:172] (0xc0005449a0) Data frame received for 5\nI0511 20:03:38.564244 3690 log.go:172] (0xc0008d2b40) (5) Data frame handling\nI0511 20:03:38.568307 3690 log.go:172] (0xc0005449a0) Data frame received for 1\nI0511 20:03:38.568362 3690 log.go:172] (0xc0008f7680) (1) Data frame handling\nI0511 20:03:38.568376 3690 log.go:172] (0xc0008f7680) (1) Data frame sent\nI0511 20:03:38.568392 3690 log.go:172] (0xc0005449a0) (0xc0008f7680) Stream removed, broadcasting: 1\nI0511 20:03:38.568417 3690 log.go:172] (0xc0005449a0) Go away received\nI0511 20:03:38.569003 3690 log.go:172] (0xc0005449a0) (0xc0008f7680) Stream removed, broadcasting: 1\nI0511 20:03:38.569027 3690 log.go:172] (0xc0005449a0) (0xc0008f7b80) Stream removed, broadcasting: 3\nI0511 20:03:38.569043 3690 log.go:172] (0xc0005449a0) (0xc0008d2b40) Stream removed, broadcasting: 5\n" May 11 20:03:38.572: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 20:03:38.572: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 20:03:48.588: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update May 11 20:03:48.588: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 20:03:48.588: INFO: Waiting for Pod statefulset-7478/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 20:03:58.634: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update May 11 20:03:58.634: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 20:04:08.595: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update STEP: Rolling back to a previous revision May 11 20:04:18.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7478 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 20:04:18.870: INFO: stderr: "I0511 20:04:18.743969 3710 log.go:172] (0xc000550000) (0xc0004d2dc0) Create stream\nI0511 20:04:18.744027 3710 log.go:172] (0xc000550000) (0xc0004d2dc0) Stream added, broadcasting: 1\nI0511 20:04:18.748942 3710 log.go:172] (0xc000550000) Reply frame received for 1\nI0511 20:04:18.749328 3710 log.go:172] (0xc000550000) (0xc0000dce60) Create stream\nI0511 20:04:18.749469 3710 log.go:172] (0xc000550000) (0xc0000dce60) Stream added, broadcasting: 3\nI0511 20:04:18.752325 3710 log.go:172] (0xc000550000) Reply frame received for 3\nI0511 20:04:18.752455 3710 log.go:172] (0xc000550000) (0xc000139720) Create stream\nI0511 20:04:18.752661 3710 log.go:172] (0xc000550000) (0xc000139720) Stream added, broadcasting: 5\nI0511 20:04:18.755772 3710 log.go:172] (0xc000550000) Reply frame received for 5\nI0511 20:04:18.834451 3710 log.go:172] (0xc000550000) Data frame received for 5\nI0511 20:04:18.834483 3710 log.go:172] (0xc000139720) (5) Data frame handling\nI0511 20:04:18.834503 3710 log.go:172] (0xc000139720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 20:04:18.864154 
3710 log.go:172] (0xc000550000) Data frame received for 5\nI0511 20:04:18.864178 3710 log.go:172] (0xc000139720) (5) Data frame handling\nI0511 20:04:18.864192 3710 log.go:172] (0xc000550000) Data frame received for 3\nI0511 20:04:18.864197 3710 log.go:172] (0xc0000dce60) (3) Data frame handling\nI0511 20:04:18.864203 3710 log.go:172] (0xc0000dce60) (3) Data frame sent\nI0511 20:04:18.864211 3710 log.go:172] (0xc000550000) Data frame received for 3\nI0511 20:04:18.864219 3710 log.go:172] (0xc0000dce60) (3) Data frame handling\nI0511 20:04:18.866290 3710 log.go:172] (0xc000550000) Data frame received for 1\nI0511 20:04:18.866325 3710 log.go:172] (0xc0004d2dc0) (1) Data frame handling\nI0511 20:04:18.866348 3710 log.go:172] (0xc0004d2dc0) (1) Data frame sent\nI0511 20:04:18.866368 3710 log.go:172] (0xc000550000) (0xc0004d2dc0) Stream removed, broadcasting: 1\nI0511 20:04:18.866389 3710 log.go:172] (0xc000550000) Go away received\nI0511 20:04:18.866679 3710 log.go:172] (0xc000550000) (0xc0004d2dc0) Stream removed, broadcasting: 1\nI0511 20:04:18.866693 3710 log.go:172] (0xc000550000) (0xc0000dce60) Stream removed, broadcasting: 3\nI0511 20:04:18.866699 3710 log.go:172] (0xc000550000) (0xc000139720) Stream removed, broadcasting: 5\n" May 11 20:04:18.870: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 20:04:18.870: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 20:04:28.906: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 11 20:04:39.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7478 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 20:04:43.342: INFO: stderr: "I0511 20:04:43.240579 3731 log.go:172] (0xc000b46160) (0xc000250640) Create stream\nI0511 20:04:43.240637 3731 log.go:172] (0xc000b46160) (0xc000250640) Stream added, broadcasting: 1\nI0511 20:04:43.243397 3731 log.go:172] (0xc000b46160) Reply frame received for 1\nI0511 20:04:43.243435 3731 log.go:172] (0xc000b46160) (0xc00043f2c0) Create stream\nI0511 20:04:43.243448 3731 log.go:172] (0xc000b46160) (0xc00043f2c0) Stream added, broadcasting: 3\nI0511 20:04:43.253243 3731 log.go:172] (0xc000b46160) Reply frame received for 3\nI0511 20:04:43.253283 3731 log.go:172] (0xc000b46160) (0xc000250e60) Create stream\nI0511 20:04:43.253292 3731 log.go:172] (0xc000b46160) (0xc000250e60) Stream added, broadcasting: 5\nI0511 20:04:43.254144 3731 log.go:172] (0xc000b46160) Reply frame received for 5\nI0511 20:04:43.335669 3731 log.go:172] (0xc000b46160) Data frame received for 5\nI0511 20:04:43.335723 3731 log.go:172] (0xc000250e60) (5) Data frame handling\nI0511 20:04:43.335743 3731 log.go:172] (0xc000250e60) (5) Data frame sent\nI0511 20:04:43.335752 3731 log.go:172] (0xc000b46160) Data frame received for 5\nI0511 20:04:43.335759 3731 log.go:172] (0xc000250e60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 20:04:43.335781 3731 log.go:172] (0xc000b46160) Data frame received for 3\nI0511 20:04:43.335797 3731 log.go:172] (0xc00043f2c0) (3) Data frame handling\nI0511 20:04:43.335816 3731 log.go:172] (0xc00043f2c0) (3) Data frame sent\nI0511 20:04:43.335828 3731 log.go:172] (0xc000b46160) Data frame received for 3\nI0511 20:04:43.335834 3731 log.go:172] (0xc00043f2c0) (3) Data frame handling\nI0511 
20:04:43.337945 3731 log.go:172] (0xc000b46160) Data frame received for 1\nI0511 20:04:43.337973 3731 log.go:172] (0xc000250640) (1) Data frame handling\nI0511 20:04:43.337991 3731 log.go:172] (0xc000250640) (1) Data frame sent\nI0511 20:04:43.338007 3731 log.go:172] (0xc000b46160) (0xc000250640) Stream removed, broadcasting: 1\nI0511 20:04:43.338028 3731 log.go:172] (0xc000b46160) Go away received\nI0511 20:04:43.338388 3731 log.go:172] (0xc000b46160) (0xc000250640) Stream removed, broadcasting: 1\nI0511 20:04:43.338407 3731 log.go:172] (0xc000b46160) (0xc00043f2c0) Stream removed, broadcasting: 3\nI0511 20:04:43.338418 3731 log.go:172] (0xc000b46160) (0xc000250e60) Stream removed, broadcasting: 5\n" May 11 20:04:43.342: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 20:04:43.342: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 20:04:53.448: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update May 11 20:04:53.448: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 20:04:53.448: INFO: Waiting for Pod statefulset-7478/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 20:04:53.448: INFO: Waiting for Pod statefulset-7478/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 20:05:03.455: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update May 11 20:05:03.455: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 20:05:03.455: INFO: Waiting for Pod statefulset-7478/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 20:05:13.459: INFO: Waiting for StatefulSet statefulset-7478/ss2 to complete update May 11 20:05:13.459: INFO: Waiting for Pod statefulset-7478/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 20:05:23.502: INFO: Deleting all statefulset in ns statefulset-7478 May 11 20:05:23.504: INFO: Scaling statefulset ss2 to 0 May 11 20:06:03.532: INFO: Waiting for statefulset status.replicas updated to 0 May 11 20:06:03.535: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:06:03.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7478" for this suite. 
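The rolling update and rollback above are both triggered the same way: mutate the StatefulSet's pod template and let the controller replace pods in reverse ordinal order, recording a new update revision (the ss2-65c7964b94 / ss2-84f9d6bf57 names in the log). A minimal sketch of that trigger with client-go follows; the container name "webserver" and the kubeconfig path are assumptions, not values read from this run:

```go
// Minimal sketch: bump a StatefulSet's template image with a strategic-merge
// patch, the same image move the log shows (2.4.38-alpine -> 2.4.39-alpine).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"webserver","image":"docker.io/library/httpd:2.4.39-alpine"}]}}}}`)
	ss, err := client.AppsV1().StatefulSets("statefulset-7478").Patch(
		context.TODO(), "ss2", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	// The rollout is complete once every pod carries status.updateRevision;
	// a rollback is the same patch with the previous image.
	fmt.Println("currentRevision:", ss.Status.CurrentRevision, "updateRevision:", ss.Status.UpdateRevision)
}
```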
• [SLOW TEST:186.724 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":213,"skipped":3343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:06:03.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-c7e41c31-1844-473a-a297-e736898bb680 STEP: Creating a pod to test consume configMaps May 11 20:06:03.668: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-370a61b0-59ea-4843-a717-df3785dd7851" in namespace "projected-9498" to be "Succeeded or Failed" May 11 20:06:03.672: INFO: Pod "pod-projected-configmaps-370a61b0-59ea-4843-a717-df3785dd7851": Phase="Pending", Reason="", readiness=false. Elapsed: 3.58265ms May 11 20:06:05.675: INFO: Pod "pod-projected-configmaps-370a61b0-59ea-4843-a717-df3785dd7851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006475942s May 11 20:06:07.678: INFO: Pod "pod-projected-configmaps-370a61b0-59ea-4843-a717-df3785dd7851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010160008s STEP: Saw pod success May 11 20:06:07.678: INFO: Pod "pod-projected-configmaps-370a61b0-59ea-4843-a717-df3785dd7851" satisfied condition "Succeeded or Failed" May 11 20:06:07.681: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-370a61b0-59ea-4843-a717-df3785dd7851 container projected-configmap-volume-test: STEP: delete the pod May 11 20:06:07.887: INFO: Waiting for pod pod-projected-configmaps-370a61b0-59ea-4843-a717-df3785dd7851 to disappear May 11 20:06:07.912: INFO: Pod pod-projected-configmaps-370a61b0-59ea-4843-a717-df3785dd7851 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:06:07.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9498" for this suite. 
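The projected-ConfigMap test builds a pod whose projected volume sets DefaultMode, then asserts the permissions of the mounted file from inside the container. A hedged sketch of such a pod object in Go follows; the names, image, mode, and mount path are illustrative placeholders, not the test's actual spec:

```go
// Sketch of a pod with a projected volume sourcing one ConfigMap and an
// explicit DefaultMode, so file permissions can be checked in the container.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0400), // the permission bits under test
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("%+v\n", pod)
}
```

The test then reads the pod's container log and checks that the listed mode matches the requested DefaultMode.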
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":214,"skipped":3371,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:06:08.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3791.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3791.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3791.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3791.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3791.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3791.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3791.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3791.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3791.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3791.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 16.176.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.176.16_udp@PTR;check="$$(dig +tcp +noall +answer +search 16.176.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.176.16_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3791.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3791.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3791.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3791.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3791.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3791.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3791.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3791.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3791.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3791.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3791.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 16.176.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.176.16_udp@PTR;check="$$(dig +tcp +noall +answer +search 16.176.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.176.16_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 20:06:16.837: INFO: Unable to read wheezy_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:16.840: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:16.843: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:16.845: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:16.864: INFO: Unable to read jessie_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:16.866: INFO: Unable to read jessie_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:16.869: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:16.871: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:16.889: INFO: Lookups using dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3 failed for: [wheezy_udp@dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_udp@dns-test-service.dns-3791.svc.cluster.local jessie_tcp@dns-test-service.dns-3791.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local] May 11 20:06:21.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:21.897: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods 
dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:21.980: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:21.983: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:22.131: INFO: Unable to read jessie_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:22.133: INFO: Unable to read jessie_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:22.135: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:22.138: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:22.163: INFO: Lookups using dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3 failed for: [wheezy_udp@dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_udp@dns-test-service.dns-3791.svc.cluster.local jessie_tcp@dns-test-service.dns-3791.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local] May 11 20:06:26.895: INFO: Unable to read wheezy_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:26.899: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:26.902: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:26.906: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:26.927: INFO: Unable to read jessie_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the 
server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:26.930: INFO: Unable to read jessie_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:26.933: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:26.935: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:26.954: INFO: Lookups using dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3 failed for: [wheezy_udp@dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_udp@dns-test-service.dns-3791.svc.cluster.local jessie_tcp@dns-test-service.dns-3791.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local] May 11 20:06:31.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:31.898: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:31.902: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:31.904: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:31.932: INFO: Unable to read jessie_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:31.935: INFO: Unable to read jessie_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:31.938: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:31.941: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod 
dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:31.959: INFO: Lookups using dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3 failed for: [wheezy_udp@dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_udp@dns-test-service.dns-3791.svc.cluster.local jessie_tcp@dns-test-service.dns-3791.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local] May 11 20:06:36.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:36.896: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:36.899: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:36.902: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:36.938: INFO: Unable to read jessie_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:36.940: INFO: Unable to read jessie_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:36.943: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:36.946: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:36.961: INFO: Lookups using dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3 failed for: [wheezy_udp@dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_udp@dns-test-service.dns-3791.svc.cluster.local jessie_tcp@dns-test-service.dns-3791.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local] May 11 
20:06:41.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:41.896: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:41.899: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:41.901: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:41.916: INFO: Unable to read jessie_udp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:41.919: INFO: Unable to read jessie_tcp@dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:41.921: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:41.923: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local from pod dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3: the server could not find the requested resource (get pods dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3) May 11 20:06:41.937: INFO: Lookups using dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3 failed for: [wheezy_udp@dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@dns-test-service.dns-3791.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_udp@dns-test-service.dns-3791.svc.cluster.local jessie_tcp@dns-test-service.dns-3791.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3791.svc.cluster.local] May 11 20:06:47.593: INFO: DNS probes using dns-3791/dns-test-a8193736-cd28-4e3b-98c1-3444d74cddc3 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:06:49.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3791" for this suite. 
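The probe pods above loop over dig queries until every A and SRV lookup for the service succeeds; the repeated "Unable to read ..." lines are those retries while DNS propagates. An equivalent minimal check in Go, using the standard resolver from a pod inside the same cluster (the service and namespace names mirror this run's dns-3791; substitute your own):

```go
// Minimal sketch of the DNS checks: resolve the service's A record and the
// SRV record Kubernetes publishes for its named port (_http._tcp).
package main

import (
	"fmt"
	"net"
)

func main() {
	svc := "dns-test-service.dns-3791.svc.cluster.local"

	// A record: the service's cluster IP (or pod IPs for a headless service).
	if addrs, err := net.LookupHost(svc); err != nil {
		fmt.Println("A lookup failed:", err)
	} else {
		fmt.Println("A:", addrs)
	}

	// SRV record: one entry per named port, carrying target host and port.
	if _, srvs, err := net.LookupSRV("http", "tcp", svc); err != nil {
		fmt.Println("SRV lookup failed:", err)
	} else {
		for _, s := range srvs {
			fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
		}
	}
}
```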
• [SLOW TEST:41.052 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":215,"skipped":3379,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:06:49.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:06:49.406: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 11 20:06:54.408: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 20:06:54.409: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 11 20:06:54.442: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6326 /apis/apps/v1/namespaces/deployment-6326/deployments/test-cleanup-deployment 61366883-fe18-4bea-8225-c46e5e7b00e4 3554955 1 2020-05-11 20:06:54 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-11 20:06:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00437bbb8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 11 20:06:54.908: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-6326 /apis/apps/v1/namespaces/deployment-6326/replicasets/test-cleanup-deployment-6688745694 40ce507b-9c79-472a-98b3-a4347488457c 3554957 1 2020-05-11 20:06:54 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 61366883-fe18-4bea-8225-c46e5e7b00e4 0xc0045ac417 0xc0045ac418}] [] [{kube-controller-manager Update apps/v1 2020-05-11 20:06:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61366883-fe18-4bea-8225-c46e5e7b00e4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045ac4a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 20:06:54.908: INFO: All old ReplicaSets of Deployment
"test-cleanup-deployment": May 11 20:06:54.909: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6326 /apis/apps/v1/namespaces/deployment-6326/replicasets/test-cleanup-controller a37d27de-c19e-48d3-8a58-d4cc9a7d29d0 3554956 1 2020-05-11 20:06:49 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 61366883-fe18-4bea-8225-c46e5e7b00e4 0xc0045ac2ff 0xc0045ac310}] [] [{e2e.test Update apps/v1 2020-05-11 20:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-11 20:06:54 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"61366883-fe18-4bea-8225-c46e5e7b00e4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0045ac3a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 20:06:55.412: INFO: Pod "test-cleanup-controller-t66w4" is available: &Pod{ObjectMeta:{test-cleanup-controller-t66w4 test-cleanup-controller- deployment-6326 /api/v1/namespaces/deployment-6326/pods/test-cleanup-controller-t66w4 a8a7621a-54a8-4f83-8830-cfc862a1d2fd 3554917 0 2020-05-11 20:06:49 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller a37d27de-c19e-48d3-8a58-d4cc9a7d29d0 0xc0045ac977 0xc0045ac978}] [] [{kube-controller-manager Update v1 2020-05-11 20:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a37d27de-c19e-48d3-8a58-d4cc9a7d29d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 20:06:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.210\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z9s5q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z9s5q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z9s5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:06:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:06:52 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.210,StartTime:2020-05-11 20:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:06:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9f6d3663d4a4efd30f0c0b46532218d40193edbf0a8dbc51af4728085740339f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 20:06:55.413: INFO: Pod "test-cleanup-deployment-6688745694-pkwcl" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-pkwcl test-cleanup-deployment-6688745694- deployment-6326 /api/v1/namespaces/deployment-6326/pods/test-cleanup-deployment-6688745694-pkwcl 57ea4ff5-c7d4-4b8c-a6bd-a67299e19804 3554962 0 2020-05-11 20:06:54 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 40ce507b-9c79-472a-98b3-a4347488457c 0xc0045acb67 0xc0045acb68}] [] [{kube-controller-manager Update v1 2020-05-11 20:06:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40ce507b-9c79-472a-98b3-a4347488457c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z9s5q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z9s5q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z9s5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:06:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:06:55.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6326" for this suite. 
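[The Deployment test above verifies that old ReplicaSets are pruned once a Deployment rolls to a new revision. A minimal sketch of the same check with client-go follows; it is not the e2e framework's code, and the function and deployment names are illustrative. The key fields are spec.revisionHistoryLimit (default 10 when unset) and the Deployment's label selector, which matches both the current and the superseded ReplicaSets.]

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkOldReplicaSetsPruned asserts that a Deployment keeps no more old
// ReplicaSets than its revisionHistoryLimit allows.
func checkOldReplicaSetsPruned(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	d, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// List every ReplicaSet matching the Deployment's selector.
	sel := metav1.FormatLabelSelector(d.Spec.Selector)
	rsList, err := c.AppsV1().ReplicaSets(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		return err
	}
	limit := int32(10) // Kubernetes default when revisionHistoryLimit is unset
	if d.Spec.RevisionHistoryLimit != nil {
		limit = *d.Spec.RevisionHistoryLimit
	}
	old := int32(len(rsList.Items)) - 1 // everything except the current revision
	if old > limit {
		return fmt.Errorf("expected at most %d old ReplicaSets, found %d", limit, old)
	}
	return nil
}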
• [SLOW TEST:6.408 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":216,"skipped":3381,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:06:55.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 20:06:55.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c" in namespace "downward-api-4733" to be "Succeeded or Failed" May 11 20:06:55.687: INFO: Pod "downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054079ms May 11 20:06:57.746: INFO: Pod "downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062139957s May 11 20:07:00.040: INFO: Pod "downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355363699s May 11 20:07:02.219: INFO: Pod "downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.535308863s STEP: Saw pod success May 11 20:07:02.220: INFO: Pod "downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c" satisfied condition "Succeeded or Failed" May 11 20:07:02.251: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c container client-container: STEP: delete the pod May 11 20:07:02.396: INFO: Waiting for pod downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c to disappear May 11 20:07:02.472: INFO: Pod downwardapi-volume-fcee7c41-6a5e-4c31-8e7e-3721df9f909c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:07:02.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4733" for this suite. 
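[The Downward API test above creates a pod whose volume exposes the container's memory limit as a file. A sketch of such a pod spec built with the core/v1 types follows; the pod name, image, mount path, and the 64Mi limit are illustrative, not the test's exact values. The mechanism is ResourceFieldRef resolving "limits.memory" per container into the mounted file.]

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryLimitPod prints its own memory limit, read from a
// downwardAPI volume file, then exits.
func downwardAPIMemoryLimitPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// limits.memory is resolved per container; with the
							// default divisor the file holds the limit in bytes.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}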
• [SLOW TEST:7.123 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":217,"skipped":3386,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:07:02.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:07:03.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-720" for this suite. STEP: Destroying namespace "nspatchtest-49b73d48-700c-4aed-9939-c059d6dddefb-1399" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":218,"skipped":3399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:07:03.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:07:03.265: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:07:07.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2182" for this suite. 
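[The Pods test above retrieves container logs over a websocket connection to the apiserver's /log subresource. For comparison, a sketch of the ordinary client path for the same endpoint follows: client-go's GetLogs issues a plain streaming HTTP request rather than a websocket, so this is an equivalent read, not a reproduction of the test's transport. Names are illustrative.]

package sketch

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs follows a pod's log stream until it closes.
func streamPodLogs(ctx context.Context, c kubernetes.Interface, ns, pod string) error {
	req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Follow: true})
	rc, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc) // copy log lines as they arrive
	return err
}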
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":219,"skipped":3424,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:07:07.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0511 20:07:08.759689 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 20:07:08.759: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:07:08.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8332" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":220,"skipped":3424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:07:08.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4626 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4626 STEP: creating replication controller externalsvc in namespace services-4626 I0511 20:07:10.035904 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4626, replica count: 2 I0511 20:07:13.086336 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:07:16.086507 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 11 20:07:16.107: INFO: Creating new exec pod May 11 20:07:20.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4626 execpodbfpkj -- /bin/sh -x -c nslookup clusterip-service' May 11 20:07:20.388: INFO: stderr: "I0511 20:07:20.294869 3765 log.go:172] (0xc000a44f20) (0xc000b48140) Create stream\nI0511 20:07:20.294935 3765 log.go:172] (0xc000a44f20) (0xc000b48140) Stream added, broadcasting: 1\nI0511 20:07:20.301963 3765 log.go:172] (0xc000a44f20) Reply frame received for 1\nI0511 20:07:20.302018 3765 log.go:172] (0xc000a44f20) (0xc0007110e0) Create stream\nI0511 20:07:20.302028 3765 log.go:172] (0xc000a44f20) (0xc0007110e0) Stream added, broadcasting: 3\nI0511 20:07:20.303001 3765 log.go:172] (0xc000a44f20) Reply frame received for 3\nI0511 20:07:20.303036 3765 log.go:172] (0xc000a44f20) (0xc00051c460) Create stream\nI0511 20:07:20.303048 3765 log.go:172] (0xc000a44f20) (0xc00051c460) Stream added, broadcasting: 5\nI0511 20:07:20.304247 3765 log.go:172] (0xc000a44f20) Reply frame received for 5\nI0511 20:07:20.377101 3765 log.go:172] (0xc000a44f20) Data frame received for 5\nI0511 20:07:20.377235 3765 log.go:172] (0xc00051c460) (5) Data frame handling\nI0511 20:07:20.377399 3765 log.go:172] (0xc00051c460) (5) Data frame sent\n+ nslookup clusterip-service\nI0511 20:07:20.381814 3765 log.go:172] (0xc000a44f20) Data frame received for 3\nI0511 20:07:20.381838 3765 log.go:172] (0xc0007110e0) (3) Data frame handling\nI0511 
20:07:20.381850 3765 log.go:172] (0xc0007110e0) (3) Data frame sent\nI0511 20:07:20.382420 3765 log.go:172] (0xc000a44f20) Data frame received for 3\nI0511 20:07:20.382471 3765 log.go:172] (0xc0007110e0) (3) Data frame handling\nI0511 20:07:20.382507 3765 log.go:172] (0xc0007110e0) (3) Data frame sent\nI0511 20:07:20.382776 3765 log.go:172] (0xc000a44f20) Data frame received for 5\nI0511 20:07:20.382797 3765 log.go:172] (0xc00051c460) (5) Data frame handling\nI0511 20:07:20.382810 3765 log.go:172] (0xc000a44f20) Data frame received for 3\nI0511 20:07:20.382823 3765 log.go:172] (0xc0007110e0) (3) Data frame handling\nI0511 20:07:20.384573 3765 log.go:172] (0xc000a44f20) Data frame received for 1\nI0511 20:07:20.384597 3765 log.go:172] (0xc000b48140) (1) Data frame handling\nI0511 20:07:20.384623 3765 log.go:172] (0xc000b48140) (1) Data frame sent\nI0511 20:07:20.384643 3765 log.go:172] (0xc000a44f20) (0xc000b48140) Stream removed, broadcasting: 1\nI0511 20:07:20.384692 3765 log.go:172] (0xc000a44f20) Go away received\nI0511 20:07:20.384943 3765 log.go:172] (0xc000a44f20) (0xc000b48140) Stream removed, broadcasting: 1\nI0511 20:07:20.384961 3765 log.go:172] (0xc000a44f20) (0xc0007110e0) Stream removed, broadcasting: 3\nI0511 20:07:20.384975 3765 log.go:172] (0xc000a44f20) (0xc00051c460) Stream removed, broadcasting: 5\n" May 11 20:07:20.388: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4626.svc.cluster.local\tcanonical name = externalsvc.services-4626.svc.cluster.local.\nName:\texternalsvc.services-4626.svc.cluster.local\nAddress: 10.96.16.211\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4626, will wait for the garbage collector to delete the pods May 11 20:07:20.526: INFO: Deleting ReplicationController externalsvc took: 5.50239ms May 11 20:07:20.626: INFO: Terminating ReplicationController externalsvc pods took: 100.194768ms May 11 20:07:35.365: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:07:35.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4626" for this suite. 
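[The Services test above flips a ClusterIP service to type=ExternalName and then proves, via the nslookup output, that the service name now resolves as a CNAME to the external name. A sketch of the type change with client-go follows; it is not the e2e jig's code, and the names are illustrative. The cluster IP must be cleared on this transition, since ExternalName services carry none.]

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName converts an existing ClusterIP service into an
// ExternalName service pointing at target, e.g. another service's FQDN.
func toExternalName(ctx context.Context, c kubernetes.Interface, ns, svcName, target string) error {
	svc, err := c.CoreV1().Services(ns).Get(ctx, svcName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = target // DNS answers a CNAME to this name
	svc.Spec.ClusterIP = ""        // ExternalName services have no cluster IP
	_, err = c.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}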
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:26.627 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":221,"skipped":3475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:07:35.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 11 20:07:39.868: INFO: &Pod{ObjectMeta:{send-events-43cf2ddd-a187-4547-8f14-6e915ebe9b32 events-4697 /api/v1/namespaces/events-4697/pods/send-events-43cf2ddd-a187-4547-8f14-6e915ebe9b32 1846bed2-373b-40e0-b831-bd35db024289 3555376 0 2020-05-11 20:07:35 +0000 UTC map[name:foo time:632225709] map[] [] [] [{e2e.test Update v1 2020-05-11 20:07:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-11 20:07:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-djsn7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-djsn7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-djsn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:07:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:07:38 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:07:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 20:07:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.91,StartTime:2020-05-11 20:07:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 20:07:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://e94082f6f8e110cfb70a1d44ec9160fa567a99643b99309023d0f77ea237a785,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 11 20:07:41.873: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 11 20:07:43.876: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:07:43.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4697" for this suite. • [SLOW TEST:8.869 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":222,"skipped":3498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:07:44.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-201 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new 
StatefulSet May 11 20:07:45.040: INFO: Found 0 stateful pods, waiting for 3 May 11 20:07:55.046: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 20:07:55.046: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 20:07:55.046: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 20:08:05.044: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 20:08:05.044: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 20:08:05.044: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 11 20:08:05.091: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 11 20:08:15.139: INFO: Updating stateful set ss2 May 11 20:08:15.224: INFO: Waiting for Pod statefulset-201/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 11 20:08:36.247: INFO: Found 2 stateful pods, waiting for 3 May 11 20:08:46.252: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 20:08:46.252: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 20:08:46.252: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 11 20:08:46.518: INFO: Updating stateful set ss2 May 11 20:08:46.729: INFO: Waiting for Pod statefulset-201/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 20:08:56.919: INFO: Updating stateful set ss2 May 11 20:08:57.288: INFO: Waiting for StatefulSet statefulset-201/ss2 to complete update May 11 20:08:57.288: INFO: Waiting for Pod statefulset-201/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 20:09:07.379: INFO: Waiting for StatefulSet statefulset-201/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 20:09:17.297: INFO: Deleting all statefulset in ns statefulset-201 May 11 20:09:17.299: INFO: Scaling statefulset ss2 to 0 May 11 20:09:47.412: INFO: Waiting for statefulset status.replicas updated to 0 May 11 20:09:47.415: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:09:47.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-201" for this suite. 
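[The StatefulSet test above drives its canary and phased rollouts through the RollingUpdate strategy's partition field: pods with an ordinal greater than or equal to the partition get the new revision, so setting partition=replicas-1 canaries one pod and lowering it step by step phases the rollout. A sketch of setting the partition with client-go follows; the function name is illustrative.]

package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setPartition updates a StatefulSet so that only pods with
// ordinal >= partition are moved to the new template revision.
func setPartition(ctx context.Context, c kubernetes.Interface, ns, name string, partition int32) error {
	ss, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	_, err = c.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
	return err
}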
• [SLOW TEST:123.192 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":223,"skipped":3522,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:09:47.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7506 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7506 I0511 20:09:47.795189 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7506, replica count: 2 I0511 20:09:50.845550 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:09:53.845735 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:09:56.845910 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 20:09:56.845: INFO: Creating new exec pod May 11 20:10:03.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7506 execpodc5k8g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 11 20:10:04.209: INFO: stderr: "I0511 20:10:04.126682 3786 log.go:172] (0xc000b78f20) (0xc000b1c500) Create stream\nI0511 20:10:04.126716 3786 log.go:172] (0xc000b78f20) (0xc000b1c500) Stream added, broadcasting: 1\nI0511 20:10:04.130719 3786 log.go:172] (0xc000b78f20) Reply frame received for 1\nI0511 20:10:04.130765 3786 log.go:172] (0xc000b78f20) (0xc000528280) Create stream\nI0511 20:10:04.130781 3786 log.go:172] (0xc000b78f20) (0xc000528280) Stream added, broadcasting: 3\nI0511 20:10:04.131775 3786 log.go:172] (0xc000b78f20) Reply frame received for 3\nI0511 20:10:04.131809 3786 log.go:172] (0xc000b78f20) (0xc000529220) Create stream\nI0511 
20:10:04.131822 3786 log.go:172] (0xc000b78f20) (0xc000529220) Stream added, broadcasting: 5\nI0511 20:10:04.132713 3786 log.go:172] (0xc000b78f20) Reply frame received for 5\nI0511 20:10:04.202739 3786 log.go:172] (0xc000b78f20) Data frame received for 5\nI0511 20:10:04.202770 3786 log.go:172] (0xc000529220) (5) Data frame handling\nI0511 20:10:04.202797 3786 log.go:172] (0xc000529220) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0511 20:10:04.203181 3786 log.go:172] (0xc000b78f20) Data frame received for 5\nI0511 20:10:04.203209 3786 log.go:172] (0xc000529220) (5) Data frame handling\nI0511 20:10:04.203256 3786 log.go:172] (0xc000529220) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0511 20:10:04.203582 3786 log.go:172] (0xc000b78f20) Data frame received for 5\nI0511 20:10:04.203597 3786 log.go:172] (0xc000529220) (5) Data frame handling\nI0511 20:10:04.203843 3786 log.go:172] (0xc000b78f20) Data frame received for 3\nI0511 20:10:04.203928 3786 log.go:172] (0xc000528280) (3) Data frame handling\nI0511 20:10:04.205695 3786 log.go:172] (0xc000b78f20) Data frame received for 1\nI0511 20:10:04.205713 3786 log.go:172] (0xc000b1c500) (1) Data frame handling\nI0511 20:10:04.205735 3786 log.go:172] (0xc000b1c500) (1) Data frame sent\nI0511 20:10:04.205753 3786 log.go:172] (0xc000b78f20) (0xc000b1c500) Stream removed, broadcasting: 1\nI0511 20:10:04.205773 3786 log.go:172] (0xc000b78f20) Go away received\nI0511 20:10:04.206055 3786 log.go:172] (0xc000b78f20) (0xc000b1c500) Stream removed, broadcasting: 1\nI0511 20:10:04.206069 3786 log.go:172] (0xc000b78f20) (0xc000528280) Stream removed, broadcasting: 3\nI0511 20:10:04.206077 3786 log.go:172] (0xc000b78f20) (0xc000529220) Stream removed, broadcasting: 5\n" May 11 20:10:04.209: INFO: stdout: "" May 11 20:10:04.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7506 execpodc5k8g -- /bin/sh -x -c nc -zv -t -w 2 10.100.209.249 80' May 11 20:10:04.396: INFO: stderr: "I0511 20:10:04.320757 3808 log.go:172] (0xc0008e2790) (0xc00068fae0) Create stream\nI0511 20:10:04.320799 3808 log.go:172] (0xc0008e2790) (0xc00068fae0) Stream added, broadcasting: 1\nI0511 20:10:04.327944 3808 log.go:172] (0xc0008e2790) Reply frame received for 1\nI0511 20:10:04.327994 3808 log.go:172] (0xc0008e2790) (0xc000684460) Create stream\nI0511 20:10:04.328025 3808 log.go:172] (0xc0008e2790) (0xc000684460) Stream added, broadcasting: 3\nI0511 20:10:04.328781 3808 log.go:172] (0xc0008e2790) Reply frame received for 3\nI0511 20:10:04.328817 3808 log.go:172] (0xc0008e2790) (0xc0004b79a0) Create stream\nI0511 20:10:04.328827 3808 log.go:172] (0xc0008e2790) (0xc0004b79a0) Stream added, broadcasting: 5\nI0511 20:10:04.329608 3808 log.go:172] (0xc0008e2790) Reply frame received for 5\nI0511 20:10:04.389807 3808 log.go:172] (0xc0008e2790) Data frame received for 5\nI0511 20:10:04.389824 3808 log.go:172] (0xc0004b79a0) (5) Data frame handling\nI0511 20:10:04.389831 3808 log.go:172] (0xc0004b79a0) (5) Data frame sent\nI0511 20:10:04.389837 3808 log.go:172] (0xc0008e2790) Data frame received for 5\nI0511 20:10:04.389842 3808 log.go:172] (0xc0004b79a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.209.249 80\nConnection to 10.100.209.249 80 port [tcp/http] succeeded!\nI0511 20:10:04.389893 3808 log.go:172] (0xc0008e2790) Data frame received for 3\nI0511 20:10:04.389912 3808 log.go:172] (0xc000684460) (3) Data frame handling\nI0511 
20:10:04.391231 3808 log.go:172] (0xc0008e2790) Data frame received for 1\nI0511 20:10:04.391259 3808 log.go:172] (0xc00068fae0) (1) Data frame handling\nI0511 20:10:04.391289 3808 log.go:172] (0xc00068fae0) (1) Data frame sent\nI0511 20:10:04.391314 3808 log.go:172] (0xc0008e2790) (0xc00068fae0) Stream removed, broadcasting: 1\nI0511 20:10:04.391504 3808 log.go:172] (0xc0008e2790) Go away received\nI0511 20:10:04.391832 3808 log.go:172] (0xc0008e2790) (0xc00068fae0) Stream removed, broadcasting: 1\nI0511 20:10:04.391860 3808 log.go:172] (0xc0008e2790) (0xc000684460) Stream removed, broadcasting: 3\nI0511 20:10:04.391880 3808 log.go:172] (0xc0008e2790) (0xc0004b79a0) Stream removed, broadcasting: 5\n" May 11 20:10:04.396: INFO: stdout: "" May 11 20:10:04.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7506 execpodc5k8g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32058' May 11 20:10:04.594: INFO: stderr: "I0511 20:10:04.527645 3827 log.go:172] (0xc000a9b1e0) (0xc0004b4d20) Create stream\nI0511 20:10:04.527691 3827 log.go:172] (0xc000a9b1e0) (0xc0004b4d20) Stream added, broadcasting: 1\nI0511 20:10:04.531314 3827 log.go:172] (0xc000a9b1e0) Reply frame received for 1\nI0511 20:10:04.531344 3827 log.go:172] (0xc000a9b1e0) (0xc00016e000) Create stream\nI0511 20:10:04.531351 3827 log.go:172] (0xc000a9b1e0) (0xc00016e000) Stream added, broadcasting: 3\nI0511 20:10:04.531971 3827 log.go:172] (0xc000a9b1e0) Reply frame received for 3\nI0511 20:10:04.532014 3827 log.go:172] (0xc000a9b1e0) (0xc0004af720) Create stream\nI0511 20:10:04.532038 3827 log.go:172] (0xc000a9b1e0) (0xc0004af720) Stream added, broadcasting: 5\nI0511 20:10:04.532819 3827 log.go:172] (0xc000a9b1e0) Reply frame received for 5\nI0511 20:10:04.586664 3827 log.go:172] (0xc000a9b1e0) Data frame received for 5\nI0511 20:10:04.586716 3827 log.go:172] (0xc0004af720) (5) Data frame handling\nI0511 20:10:04.586752 3827 log.go:172] (0xc0004af720) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32058\nI0511 20:10:04.586887 3827 log.go:172] (0xc000a9b1e0) Data frame received for 5\nI0511 20:10:04.586908 3827 log.go:172] (0xc0004af720) (5) Data frame handling\nI0511 20:10:04.586942 3827 log.go:172] (0xc0004af720) (5) Data frame sent\nConnection to 172.17.0.13 32058 port [tcp/32058] succeeded!\nI0511 20:10:04.587585 3827 log.go:172] (0xc000a9b1e0) Data frame received for 5\nI0511 20:10:04.587736 3827 log.go:172] (0xc0004af720) (5) Data frame handling\nI0511 20:10:04.587893 3827 log.go:172] (0xc000a9b1e0) Data frame received for 3\nI0511 20:10:04.587970 3827 log.go:172] (0xc00016e000) (3) Data frame handling\nI0511 20:10:04.589612 3827 log.go:172] (0xc000a9b1e0) Data frame received for 1\nI0511 20:10:04.589637 3827 log.go:172] (0xc0004b4d20) (1) Data frame handling\nI0511 20:10:04.589663 3827 log.go:172] (0xc0004b4d20) (1) Data frame sent\nI0511 20:10:04.589705 3827 log.go:172] (0xc000a9b1e0) (0xc0004b4d20) Stream removed, broadcasting: 1\nI0511 20:10:04.589928 3827 log.go:172] (0xc000a9b1e0) Go away received\nI0511 20:10:04.590035 3827 log.go:172] (0xc000a9b1e0) (0xc0004b4d20) Stream removed, broadcasting: 1\nI0511 20:10:04.590056 3827 log.go:172] (0xc000a9b1e0) (0xc00016e000) Stream removed, broadcasting: 3\nI0511 20:10:04.590069 3827 log.go:172] (0xc000a9b1e0) (0xc0004af720) Stream removed, broadcasting: 5\n" May 11 20:10:04.594: INFO: stdout: "" May 11 20:10:04.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=services-7506 execpodc5k8g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32058' May 11 20:10:04.775: INFO: stderr: "I0511 20:10:04.708895 3848 log.go:172] (0xc000ba98c0) (0xc0006d6640) Create stream\nI0511 20:10:04.708950 3848 log.go:172] (0xc000ba98c0) (0xc0006d6640) Stream added, broadcasting: 1\nI0511 20:10:04.712074 3848 log.go:172] (0xc000ba98c0) Reply frame received for 1\nI0511 20:10:04.712097 3848 log.go:172] (0xc000ba98c0) (0xc000685540) Create stream\nI0511 20:10:04.712103 3848 log.go:172] (0xc000ba98c0) (0xc000685540) Stream added, broadcasting: 3\nI0511 20:10:04.712743 3848 log.go:172] (0xc000ba98c0) Reply frame received for 3\nI0511 20:10:04.712762 3848 log.go:172] (0xc000ba98c0) (0xc00058e280) Create stream\nI0511 20:10:04.712770 3848 log.go:172] (0xc000ba98c0) (0xc00058e280) Stream added, broadcasting: 5\nI0511 20:10:04.713669 3848 log.go:172] (0xc000ba98c0) Reply frame received for 5\nI0511 20:10:04.770034 3848 log.go:172] (0xc000ba98c0) Data frame received for 3\nI0511 20:10:04.770054 3848 log.go:172] (0xc000685540) (3) Data frame handling\nI0511 20:10:04.770226 3848 log.go:172] (0xc000ba98c0) Data frame received for 5\nI0511 20:10:04.770235 3848 log.go:172] (0xc00058e280) (5) Data frame handling\nI0511 20:10:04.770255 3848 log.go:172] (0xc00058e280) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32058\nConnection to 172.17.0.12 32058 port [tcp/32058] succeeded!\nI0511 20:10:04.770437 3848 log.go:172] (0xc000ba98c0) Data frame received for 5\nI0511 20:10:04.770456 3848 log.go:172] (0xc00058e280) (5) Data frame handling\nI0511 20:10:04.771749 3848 log.go:172] (0xc000ba98c0) Data frame received for 1\nI0511 20:10:04.771762 3848 log.go:172] (0xc0006d6640) (1) Data frame handling\nI0511 20:10:04.771777 3848 log.go:172] (0xc0006d6640) (1) Data frame sent\nI0511 20:10:04.771787 3848 log.go:172] (0xc000ba98c0) (0xc0006d6640) Stream removed, broadcasting: 1\nI0511 20:10:04.771854 3848 log.go:172] (0xc000ba98c0) Go away received\nI0511 20:10:04.772021 3848 log.go:172] (0xc000ba98c0) (0xc0006d6640) Stream removed, broadcasting: 1\nI0511 20:10:04.772032 3848 log.go:172] (0xc000ba98c0) (0xc000685540) Stream removed, broadcasting: 3\nI0511 20:10:04.772039 3848 log.go:172] (0xc000ba98c0) (0xc00058e280) Stream removed, broadcasting: 5\n" May 11 20:10:04.775: INFO: stdout: "" May 11 20:10:04.775: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:10:04.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7506" for this suite. 
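[The four `nc -zv` invocations above check the service by DNS name, by cluster IP, and by node port on each node. A sketch of the node-port leg done directly from Go follows, as a TCP dial with a short timeout is equivalent to `nc -zv -w 2`; the function name is illustrative.]

package sketch

import (
	"fmt"
	"net"
	"time"
)

// checkNodePort verifies that nodeIP:nodePort accepts TCP connections.
func checkNodePort(nodeIP string, nodePort int32) error {
	addr := net.JoinHostPort(nodeIP, fmt.Sprint(nodePort))
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("node port %s not reachable: %w", addr, err)
	}
	return conn.Close()
}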
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:17.525 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":224,"skipped":3529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:10:05.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:10:05.445: INFO: Creating ReplicaSet my-hostname-basic-2f0dc906-50b3-4668-8fee-dfa601d9820d May 11 20:10:05.531: INFO: Pod name my-hostname-basic-2f0dc906-50b3-4668-8fee-dfa601d9820d: Found 0 pods out of 1 May 11 20:10:10.610: INFO: Pod name my-hostname-basic-2f0dc906-50b3-4668-8fee-dfa601d9820d: Found 1 pods out of 1 May 11 20:10:10.610: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2f0dc906-50b3-4668-8fee-dfa601d9820d" is running May 11 20:10:10.671: INFO: Pod "my-hostname-basic-2f0dc906-50b3-4668-8fee-dfa601d9820d-xpxkp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 20:10:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 20:10:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 20:10:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 20:10:05 +0000 UTC Reason: Message:}]) May 11 20:10:10.671: INFO: Trying to dial the pod May 11 20:10:15.682: INFO: Controller my-hostname-basic-2f0dc906-50b3-4668-8fee-dfa601d9820d: Got expected result from replica 1 [my-hostname-basic-2f0dc906-50b3-4668-8fee-dfa601d9820d-xpxkp]: "my-hostname-basic-2f0dc906-50b3-4668-8fee-dfa601d9820d-xpxkp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:10:15.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1610" for this suite. 
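[The ReplicaSet test above runs agnhost's serve-hostname handler in each replica and checks that every replica answers with its own pod name ("Got expected result from replica 1"). The test dials the pods through the apiserver; the sketch below does the same check directly against a pod IP from inside the cluster network, so treat the transport, port, and names as illustrative.]

package sketch

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
	"time"
)

// verifyReplicaServesHostname asserts that a serve-hostname replica
// answers HTTP requests with its own pod name.
func verifyReplicaServesHostname(podIP, podName string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:80/", podIP))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if got := strings.TrimSpace(string(body)); got != podName {
		return fmt.Errorf("replica at %s answered %q, want %q", podIP, got, podName)
	}
	return nil
}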
• [SLOW TEST:10.684 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":225,"skipped":3559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:10:15.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:10:15.907: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:10:16.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6850" for this suite. 
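[The CustomResourceDefinition test above creates and deletes a CRD through the apiextensions API. A sketch of the same round trip with the apiextensions clientset follows; the example.com group and Widget kind are illustrative. apiextensions.k8s.io/v1 requires a structural schema, hence the minimal type:object schema.]

package sketch

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// createAndDeleteCRD registers a minimal namespaced CRD, then removes it.
func createAndDeleteCRD(ctx context.Context, cfg *rest.Config) error {
	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		return err
	}
	return client.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, crd.Name, metav1.DeleteOptions{})
}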
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":226,"skipped":3592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:10:16.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-71d07503-a61e-4f6a-ad4d-57a9626f8482 in namespace container-probe-9753 May 11 20:10:21.352: INFO: Started pod liveness-71d07503-a61e-4f6a-ad4d-57a9626f8482 in namespace container-probe-9753 STEP: checking the pod's current state and verifying that restartCount is present May 11 20:10:21.354: INFO: Initial restart count of pod liveness-71d07503-a61e-4f6a-ad4d-57a9626f8482 is 0 May 11 20:10:41.434: INFO: Restart count of pod container-probe-9753/liveness-71d07503-a61e-4f6a-ad4d-57a9626f8482 is now 1 (20.080123492s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:10:41.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9753" for this suite. 
• [SLOW TEST:24.550 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":227,"skipped":3692,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:10:41.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 11 20:10:41.989: INFO: Waiting up to 5m0s for pod "client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d" in namespace "containers-9651" to be "Succeeded or Failed" May 11 20:10:42.026: INFO: Pod "client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.474792ms May 11 20:10:44.155: INFO: Pod "client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166295011s May 11 20:10:46.159: INFO: Pod "client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169961525s May 11 20:10:48.177: INFO: Pod "client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18787504s STEP: Saw pod success May 11 20:10:48.177: INFO: Pod "client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d" satisfied condition "Succeeded or Failed" May 11 20:10:48.186: INFO: Trying to get logs from node latest-worker2 pod client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d container test-container: STEP: delete the pod May 11 20:10:48.288: INFO: Waiting for pod client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d to disappear May 11 20:10:48.325: INFO: Pod client-containers-3fa9f8b9-7ab9-4d9e-9e43-c08c81a5961d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:10:48.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9651" for this suite. 
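[The Docker Containers test above confirms that a pod spec can override an image's built-in entrypoint. The rule it exercises: the container's `command` replaces the image ENTRYPOINT and `args` replaces CMD. A sketch with illustrative names and arguments:]

package sketch

import corev1 "k8s.io/api/core/v1"

// overrideEntrypointContainer ignores the image's default entrypoint
// and runs an explicit command instead.
func overrideEntrypointContainer() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/echo"},                 // replaces the image ENTRYPOINT
		Args:    []string{"entrypoint", "overridden"},  // replaces the image CMD
	}
}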
• [SLOW TEST:6.833 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":228,"skipped":3714,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:10:48.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 20:10:56.735: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 20:10:56.743: INFO: Pod pod-with-prestop-http-hook still exists May 11 20:10:58.743: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 20:10:58.747: INFO: Pod pod-with-prestop-http-hook still exists May 11 20:11:00.743: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 20:11:00.747: INFO: Pod pod-with-prestop-http-hook still exists May 11 20:11:02.743: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 20:11:02.748: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:11:02.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1430" for this suite. 
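[The lifecycle-hook test above deletes a pod that carries a preStop httpGet hook and then checks that the handler pod saw the request: on deletion the kubelet performs the GET before sending SIGTERM, which is why the pod lingers through several "still exists" polls. A sketch of such a pod follows; the handler address, path, and port are illustrative.]

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPreStopHTTPHook issues a GET against handlerIP:8080 when the
// pod is deleted, before its container receives SIGTERM.
func podWithPreStopHTTPHook(ns, handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/pause:3.2",
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{ // Handler is the pre-1.23 API name
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: handlerIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}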
• [SLOW TEST:14.432 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":229,"skipped":3728,"failed":0} SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:11:02.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:11:02.811: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7112 I0511 20:11:02.833646 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7112, replica count: 1 I0511 20:11:03.883998 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:11:04.884208 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:11:05.884412 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:11:06.884611 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 20:11:07.007: INFO: Created: latency-svc-q89ss May 11 20:11:07.019: INFO: Got endpoints: latency-svc-q89ss [34.446079ms] May 11 20:11:07.095: INFO: Created: latency-svc-q9b9d May 11 20:11:07.120: INFO: Got endpoints: latency-svc-q9b9d [100.630279ms] May 11 20:11:07.120: INFO: Created: latency-svc-q4dr5 May 11 20:11:07.133: INFO: Got endpoints: latency-svc-q4dr5 [114.028295ms] May 11 20:11:07.155: INFO: Created: latency-svc-wvvfc May 11 20:11:07.171: INFO: Got endpoints: latency-svc-wvvfc [151.901896ms] May 11 20:11:07.187: INFO: Created: latency-svc-d8xs8 May 11 20:11:07.227: INFO: Got endpoints: latency-svc-d8xs8 [207.942195ms] May 11 20:11:07.241: INFO: Created: latency-svc-f5j8d May 11 20:11:07.253: INFO: Got endpoints: latency-svc-f5j8d [234.392127ms] May 11 20:11:07.271: INFO: Created: latency-svc-ssl2k May 11 20:11:07.283: INFO: Got endpoints: latency-svc-ssl2k [264.465681ms] May 11 20:11:07.311: INFO: Created: latency-svc-r4g2v May 11 20:11:07.365: INFO: Got endpoints: latency-svc-r4g2v [345.911923ms] May 11 20:11:07.373: INFO: Created: latency-svc-lqvfq May 11 20:11:07.388: INFO: Got 
endpoints: latency-svc-lqvfq [368.860375ms] May 11 20:11:07.426: INFO: Created: latency-svc-7bkqw May 11 20:11:07.463: INFO: Got endpoints: latency-svc-7bkqw [444.479147ms] May 11 20:11:07.527: INFO: Created: latency-svc-njcxg May 11 20:11:07.563: INFO: Got endpoints: latency-svc-njcxg [544.065574ms] May 11 20:11:07.600: INFO: Created: latency-svc-flwvx May 11 20:11:07.622: INFO: Got endpoints: latency-svc-flwvx [603.441951ms] May 11 20:11:07.683: INFO: Created: latency-svc-f7hjf May 11 20:11:07.688: INFO: Got endpoints: latency-svc-f7hjf [669.32864ms] May 11 20:11:07.716: INFO: Created: latency-svc-9kqf6 May 11 20:11:07.732: INFO: Got endpoints: latency-svc-9kqf6 [712.506366ms] May 11 20:11:07.757: INFO: Created: latency-svc-hdqq4 May 11 20:11:07.826: INFO: Got endpoints: latency-svc-hdqq4 [807.078048ms] May 11 20:11:07.863: INFO: Created: latency-svc-h62hr May 11 20:11:07.889: INFO: Got endpoints: latency-svc-h62hr [869.701664ms] May 11 20:11:07.913: INFO: Created: latency-svc-n2p6l May 11 20:11:07.960: INFO: Got endpoints: latency-svc-n2p6l [839.984114ms] May 11 20:11:08.033: INFO: Created: latency-svc-c2wk2 May 11 20:11:08.219: INFO: Got endpoints: latency-svc-c2wk2 [1.085963591s] May 11 20:11:08.268: INFO: Created: latency-svc-jgnb7 May 11 20:11:08.308: INFO: Got endpoints: latency-svc-jgnb7 [1.137328s] May 11 20:11:08.468: INFO: Created: latency-svc-95c7f May 11 20:11:08.554: INFO: Got endpoints: latency-svc-95c7f [1.327251038s] May 11 20:11:08.682: INFO: Created: latency-svc-k655b May 11 20:11:08.686: INFO: Got endpoints: latency-svc-k655b [1.432660908s] May 11 20:11:08.904: INFO: Created: latency-svc-kw6z2 May 11 20:11:08.950: INFO: Got endpoints: latency-svc-kw6z2 [1.666819586s] May 11 20:11:09.119: INFO: Created: latency-svc-xbmwq May 11 20:11:09.168: INFO: Got endpoints: latency-svc-xbmwq [1.803022281s] May 11 20:11:09.203: INFO: Created: latency-svc-djx46 May 11 20:11:09.214: INFO: Got endpoints: latency-svc-djx46 [1.826553423s] May 11 20:11:09.288: INFO: Created: latency-svc-vxh97 May 11 20:11:09.298: INFO: Got endpoints: latency-svc-vxh97 [1.834915668s] May 11 20:11:09.341: INFO: Created: latency-svc-wjjmg May 11 20:11:09.421: INFO: Got endpoints: latency-svc-wjjmg [1.858140646s] May 11 20:11:09.437: INFO: Created: latency-svc-tj55b May 11 20:11:09.481: INFO: Got endpoints: latency-svc-tj55b [1.85856921s] May 11 20:11:09.482: INFO: Created: latency-svc-dtvdb May 11 20:11:09.492: INFO: Got endpoints: latency-svc-dtvdb [1.803781608s] May 11 20:11:09.593: INFO: Created: latency-svc-4xdwc May 11 20:11:09.611: INFO: Got endpoints: latency-svc-4xdwc [1.879606012s] May 11 20:11:09.648: INFO: Created: latency-svc-vxphq May 11 20:11:09.665: INFO: Got endpoints: latency-svc-vxphq [1.839383687s] May 11 20:11:09.740: INFO: Created: latency-svc-7tsqr May 11 20:11:09.750: INFO: Got endpoints: latency-svc-7tsqr [1.861306911s] May 11 20:11:09.796: INFO: Created: latency-svc-sfkph May 11 20:11:09.810: INFO: Got endpoints: latency-svc-sfkph [1.850356129s] May 11 20:11:09.889: INFO: Created: latency-svc-t4ckh May 11 20:11:09.925: INFO: Got endpoints: latency-svc-t4ckh [1.706307782s] May 11 20:11:09.967: INFO: Created: latency-svc-pnswv May 11 20:11:10.060: INFO: Got endpoints: latency-svc-pnswv [1.751544171s] May 11 20:11:10.061: INFO: Created: latency-svc-t8lwj May 11 20:11:10.080: INFO: Got endpoints: latency-svc-t8lwj [1.525461181s] May 11 20:11:10.128: INFO: Created: latency-svc-5p5kp May 11 20:11:10.141: INFO: Got endpoints: latency-svc-5p5kp [1.454679638s] May 11 20:11:10.235: INFO: 
Created: latency-svc-z48m2 May 11 20:11:10.238: INFO: Got endpoints: latency-svc-z48m2 [1.287479297s] May 11 20:11:10.265: INFO: Created: latency-svc-nzq2g May 11 20:11:10.279: INFO: Got endpoints: latency-svc-nzq2g [1.111357904s] May 11 20:11:10.396: INFO: Created: latency-svc-x24jg May 11 20:11:10.399: INFO: Got endpoints: latency-svc-x24jg [1.184699216s] May 11 20:11:10.428: INFO: Created: latency-svc-spzvv May 11 20:11:10.457: INFO: Got endpoints: latency-svc-spzvv [1.158861886s] May 11 20:11:10.598: INFO: Created: latency-svc-7flcg May 11 20:11:10.627: INFO: Got endpoints: latency-svc-7flcg [1.205364659s] May 11 20:11:10.627: INFO: Created: latency-svc-2qf58 May 11 20:11:10.654: INFO: Got endpoints: latency-svc-2qf58 [1.173031325s] May 11 20:11:10.757: INFO: Created: latency-svc-g2fhp May 11 20:11:10.772: INFO: Got endpoints: latency-svc-g2fhp [1.279893674s] May 11 20:11:10.794: INFO: Created: latency-svc-c9jp2 May 11 20:11:10.808: INFO: Got endpoints: latency-svc-c9jp2 [1.196881286s] May 11 20:11:10.880: INFO: Created: latency-svc-2pwtd May 11 20:11:10.883: INFO: Got endpoints: latency-svc-2pwtd [1.217938847s] May 11 20:11:11.203: INFO: Created: latency-svc-p9q2k May 11 20:11:11.389: INFO: Got endpoints: latency-svc-p9q2k [1.638592424s] May 11 20:11:11.545: INFO: Created: latency-svc-s5pbh May 11 20:11:11.583: INFO: Got endpoints: latency-svc-s5pbh [1.772424057s] May 11 20:11:12.061: INFO: Created: latency-svc-n84th May 11 20:11:12.102: INFO: Got endpoints: latency-svc-n84th [2.176593498s] May 11 20:11:12.229: INFO: Created: latency-svc-v8ns4 May 11 20:11:12.258: INFO: Got endpoints: latency-svc-v8ns4 [2.198184497s] May 11 20:11:12.326: INFO: Created: latency-svc-hhhc6 May 11 20:11:12.516: INFO: Got endpoints: latency-svc-hhhc6 [2.436423815s] May 11 20:11:12.606: INFO: Created: latency-svc-4f75c May 11 20:11:12.686: INFO: Got endpoints: latency-svc-4f75c [2.545263796s] May 11 20:11:12.856: INFO: Created: latency-svc-zs9s8 May 11 20:11:12.861: INFO: Got endpoints: latency-svc-zs9s8 [2.623217891s] May 11 20:11:12.924: INFO: Created: latency-svc-drctc May 11 20:11:12.951: INFO: Got endpoints: latency-svc-drctc [2.671862273s] May 11 20:11:13.038: INFO: Created: latency-svc-4cn9h May 11 20:11:13.088: INFO: Got endpoints: latency-svc-4cn9h [2.688842484s] May 11 20:11:13.118: INFO: Created: latency-svc-sj8zr May 11 20:11:13.216: INFO: Got endpoints: latency-svc-sj8zr [2.758693514s] May 11 20:11:13.254: INFO: Created: latency-svc-z5k7d May 11 20:11:13.263: INFO: Got endpoints: latency-svc-z5k7d [2.636493452s] May 11 20:11:13.298: INFO: Created: latency-svc-qjlt8 May 11 20:11:13.311: INFO: Got endpoints: latency-svc-qjlt8 [2.657214618s] May 11 20:11:13.419: INFO: Created: latency-svc-cqfw4 May 11 20:11:13.443: INFO: Got endpoints: latency-svc-cqfw4 [2.670543216s] May 11 20:11:13.618: INFO: Created: latency-svc-nkrj6 May 11 20:11:13.629: INFO: Got endpoints: latency-svc-nkrj6 [2.820853485s] May 11 20:11:13.663: INFO: Created: latency-svc-dcrhn May 11 20:11:13.679: INFO: Got endpoints: latency-svc-dcrhn [2.7951299s] May 11 20:11:13.780: INFO: Created: latency-svc-75cgk May 11 20:11:13.802: INFO: Got endpoints: latency-svc-75cgk [2.413751396s] May 11 20:11:13.848: INFO: Created: latency-svc-5k4zv May 11 20:11:13.946: INFO: Got endpoints: latency-svc-5k4zv [2.363622014s] May 11 20:11:14.001: INFO: Created: latency-svc-bq2d4 May 11 20:11:14.037: INFO: Got endpoints: latency-svc-bq2d4 [1.935197196s] May 11 20:11:14.110: INFO: Created: latency-svc-fn9v9 May 11 20:11:14.117: INFO: Got endpoints: 
latency-svc-fn9v9 [1.859030503s] May 11 20:11:14.144: INFO: Created: latency-svc-rprt2 May 11 20:11:14.159: INFO: Got endpoints: latency-svc-rprt2 [1.642496615s] May 11 20:11:14.275: INFO: Created: latency-svc-jfwn5 May 11 20:11:14.278: INFO: Got endpoints: latency-svc-jfwn5 [1.591291037s] May 11 20:11:14.331: INFO: Created: latency-svc-rnl7q May 11 20:11:14.345: INFO: Got endpoints: latency-svc-rnl7q [1.483768203s] May 11 20:11:14.366: INFO: Created: latency-svc-cml6f May 11 20:11:14.419: INFO: Got endpoints: latency-svc-cml6f [1.468146358s] May 11 20:11:14.503: INFO: Created: latency-svc-qwpgp May 11 20:11:14.586: INFO: Got endpoints: latency-svc-qwpgp [1.498413969s] May 11 20:11:14.653: INFO: Created: latency-svc-qzghf May 11 20:11:14.681: INFO: Got endpoints: latency-svc-qzghf [1.465469373s] May 11 20:11:14.742: INFO: Created: latency-svc-72p6x May 11 20:11:14.754: INFO: Got endpoints: latency-svc-72p6x [1.490273175s] May 11 20:11:14.817: INFO: Created: latency-svc-xzgk9 May 11 20:11:14.832: INFO: Got endpoints: latency-svc-xzgk9 [1.520749805s] May 11 20:11:14.899: INFO: Created: latency-svc-zzlpl May 11 20:11:14.910: INFO: Got endpoints: latency-svc-zzlpl [1.467300256s] May 11 20:11:14.929: INFO: Created: latency-svc-9llmf May 11 20:11:14.942: INFO: Got endpoints: latency-svc-9llmf [1.31250468s] May 11 20:11:14.961: INFO: Created: latency-svc-7fctx May 11 20:11:14.978: INFO: Got endpoints: latency-svc-7fctx [1.299301528s] May 11 20:11:15.032: INFO: Created: latency-svc-47l5s May 11 20:11:15.039: INFO: Got endpoints: latency-svc-47l5s [1.236596796s] May 11 20:11:15.067: INFO: Created: latency-svc-6c78d May 11 20:11:15.080: INFO: Got endpoints: latency-svc-6c78d [1.133420647s] May 11 20:11:15.129: INFO: Created: latency-svc-f87f7 May 11 20:11:15.209: INFO: Got endpoints: latency-svc-f87f7 [1.171726996s] May 11 20:11:15.238: INFO: Created: latency-svc-v4c4m May 11 20:11:15.261: INFO: Got endpoints: latency-svc-v4c4m [1.144117392s] May 11 20:11:15.301: INFO: Created: latency-svc-t6n8s May 11 20:11:15.389: INFO: Got endpoints: latency-svc-t6n8s [1.230352523s] May 11 20:11:15.412: INFO: Created: latency-svc-wwcwh May 11 20:11:15.422: INFO: Got endpoints: latency-svc-wwcwh [1.144850937s] May 11 20:11:15.439: INFO: Created: latency-svc-6xvnh May 11 20:11:15.453: INFO: Got endpoints: latency-svc-6xvnh [1.108347651s] May 11 20:11:15.899: INFO: Created: latency-svc-8xtlg May 11 20:11:15.910: INFO: Got endpoints: latency-svc-8xtlg [1.490317549s] May 11 20:11:15.938: INFO: Created: latency-svc-2bthh May 11 20:11:15.951: INFO: Got endpoints: latency-svc-2bthh [1.36404117s] May 11 20:11:15.987: INFO: Created: latency-svc-zsx9s May 11 20:11:16.085: INFO: Got endpoints: latency-svc-zsx9s [1.403655849s] May 11 20:11:16.086: INFO: Created: latency-svc-hc84f May 11 20:11:16.095: INFO: Got endpoints: latency-svc-hc84f [1.34187157s] May 11 20:11:16.160: INFO: Created: latency-svc-kr5zv May 11 20:11:16.299: INFO: Got endpoints: latency-svc-kr5zv [1.467123335s] May 11 20:11:16.354: INFO: Created: latency-svc-qdx9f May 11 20:11:16.366: INFO: Got endpoints: latency-svc-qdx9f [1.456285504s] May 11 20:11:16.494: INFO: Created: latency-svc-bqm4r May 11 20:11:16.503: INFO: Got endpoints: latency-svc-bqm4r [1.561096672s] May 11 20:11:16.575: INFO: Created: latency-svc-nwp5m May 11 20:11:16.588: INFO: Got endpoints: latency-svc-nwp5m [1.610305768s] May 11 20:11:16.642: INFO: Created: latency-svc-788xk May 11 20:11:16.669: INFO: Got endpoints: latency-svc-788xk [1.630148126s] May 11 20:11:16.713: INFO: Created: 
latency-svc-5sgws May 11 20:11:16.727: INFO: Got endpoints: latency-svc-5sgws [1.64779238s] May 11 20:11:16.843: INFO: Created: latency-svc-5jx7d May 11 20:11:16.865: INFO: Got endpoints: latency-svc-5jx7d [1.655600533s] May 11 20:11:16.970: INFO: Created: latency-svc-sjqgx May 11 20:11:16.983: INFO: Got endpoints: latency-svc-sjqgx [1.721606181s] May 11 20:11:17.013: INFO: Created: latency-svc-66b6n May 11 20:11:17.026: INFO: Got endpoints: latency-svc-66b6n [1.637306959s] May 11 20:11:17.048: INFO: Created: latency-svc-h46cv May 11 20:11:17.063: INFO: Got endpoints: latency-svc-h46cv [1.640864829s] May 11 20:11:17.120: INFO: Created: latency-svc-kdpzz May 11 20:11:17.129: INFO: Got endpoints: latency-svc-kdpzz [1.67630017s] May 11 20:11:17.151: INFO: Created: latency-svc-f9zrg May 11 20:11:17.166: INFO: Got endpoints: latency-svc-f9zrg [1.256054053s] May 11 20:11:17.187: INFO: Created: latency-svc-n89wz May 11 20:11:17.196: INFO: Got endpoints: latency-svc-n89wz [1.245235637s] May 11 20:11:17.217: INFO: Created: latency-svc-w7czp May 11 20:11:17.288: INFO: Got endpoints: latency-svc-w7czp [1.202759188s] May 11 20:11:17.299: INFO: Created: latency-svc-kd5dj May 11 20:11:17.318: INFO: Created: latency-svc-rfft4 May 11 20:11:17.318: INFO: Got endpoints: latency-svc-kd5dj [1.222909898s] May 11 20:11:17.349: INFO: Got endpoints: latency-svc-rfft4 [1.049399326s] May 11 20:11:17.379: INFO: Created: latency-svc-bj27c May 11 20:11:17.450: INFO: Got endpoints: latency-svc-bj27c [1.083322612s] May 11 20:11:17.454: INFO: Created: latency-svc-gtsh9 May 11 20:11:17.480: INFO: Got endpoints: latency-svc-gtsh9 [977.103923ms] May 11 20:11:17.481: INFO: Created: latency-svc-jwszw May 11 20:11:17.492: INFO: Got endpoints: latency-svc-jwszw [903.651666ms] May 11 20:11:17.509: INFO: Created: latency-svc-hkx7q May 11 20:11:17.522: INFO: Got endpoints: latency-svc-hkx7q [852.45825ms] May 11 20:11:17.545: INFO: Created: latency-svc-d8qqp May 11 20:11:17.611: INFO: Got endpoints: latency-svc-d8qqp [883.292139ms] May 11 20:11:17.613: INFO: Created: latency-svc-gxs2g May 11 20:11:17.619: INFO: Got endpoints: latency-svc-gxs2g [753.845669ms] May 11 20:11:17.650: INFO: Created: latency-svc-8s2cp May 11 20:11:17.667: INFO: Got endpoints: latency-svc-8s2cp [683.819613ms] May 11 20:11:17.690: INFO: Created: latency-svc-gzl76 May 11 20:11:17.703: INFO: Got endpoints: latency-svc-gzl76 [676.753363ms] May 11 20:11:17.755: INFO: Created: latency-svc-t2rgn May 11 20:11:17.791: INFO: Got endpoints: latency-svc-t2rgn [727.804326ms] May 11 20:11:17.830: INFO: Created: latency-svc-np4sv May 11 20:11:17.842: INFO: Got endpoints: latency-svc-np4sv [712.324137ms] May 11 20:11:17.917: INFO: Created: latency-svc-zbfd2 May 11 20:11:17.927: INFO: Got endpoints: latency-svc-zbfd2 [760.760716ms] May 11 20:11:17.961: INFO: Created: latency-svc-6sjdq May 11 20:11:17.986: INFO: Got endpoints: latency-svc-6sjdq [790.575007ms] May 11 20:11:18.054: INFO: Created: latency-svc-sz6zs May 11 20:11:18.058: INFO: Got endpoints: latency-svc-sz6zs [770.365482ms] May 11 20:11:18.093: INFO: Created: latency-svc-g6zgx May 11 20:11:18.129: INFO: Got endpoints: latency-svc-g6zgx [810.111403ms] May 11 20:11:18.258: INFO: Created: latency-svc-nzm7w May 11 20:11:18.295: INFO: Got endpoints: latency-svc-nzm7w [945.838828ms] May 11 20:11:18.298: INFO: Created: latency-svc-855wx May 11 20:11:18.339: INFO: Got endpoints: latency-svc-855wx [889.013742ms] May 11 20:11:18.407: INFO: Created: latency-svc-zk5ll May 11 20:11:18.411: INFO: Got endpoints: 
latency-svc-zk5ll [930.78231ms] May 11 20:11:18.475: INFO: Created: latency-svc-pz8vp May 11 20:11:18.505: INFO: Got endpoints: latency-svc-pz8vp [1.012980259s] May 11 20:11:18.648: INFO: Created: latency-svc-pml7t May 11 20:11:18.651: INFO: Got endpoints: latency-svc-pml7t [1.129758175s] May 11 20:11:18.907: INFO: Created: latency-svc-7b8wx May 11 20:11:18.998: INFO: Got endpoints: latency-svc-7b8wx [1.386680848s] May 11 20:11:19.225: INFO: Created: latency-svc-7pk5h May 11 20:11:19.238: INFO: Got endpoints: latency-svc-7pk5h [1.619822929s] May 11 20:11:19.287: INFO: Created: latency-svc-b6pfv May 11 20:11:19.303: INFO: Got endpoints: latency-svc-b6pfv [1.636643249s] May 11 20:11:19.365: INFO: Created: latency-svc-k6q5g May 11 20:11:19.387: INFO: Got endpoints: latency-svc-k6q5g [1.683850492s] May 11 20:11:19.423: INFO: Created: latency-svc-2cc72 May 11 20:11:19.436: INFO: Got endpoints: latency-svc-2cc72 [1.644453331s] May 11 20:11:19.455: INFO: Created: latency-svc-r7h94 May 11 20:11:19.527: INFO: Got endpoints: latency-svc-r7h94 [1.684873482s] May 11 20:11:19.549: INFO: Created: latency-svc-lgbcc May 11 20:11:19.569: INFO: Got endpoints: latency-svc-lgbcc [1.642558197s] May 11 20:11:19.682: INFO: Created: latency-svc-59rc6 May 11 20:11:19.689: INFO: Got endpoints: latency-svc-59rc6 [1.702345717s] May 11 20:11:19.713: INFO: Created: latency-svc-57tcv May 11 20:11:19.726: INFO: Got endpoints: latency-svc-57tcv [1.667258983s] May 11 20:11:19.751: INFO: Created: latency-svc-ngtsp May 11 20:11:19.761: INFO: Got endpoints: latency-svc-ngtsp [1.632485277s] May 11 20:11:19.833: INFO: Created: latency-svc-b4kql May 11 20:11:19.861: INFO: Got endpoints: latency-svc-b4kql [1.566232594s] May 11 20:11:19.862: INFO: Created: latency-svc-gz5gj May 11 20:11:19.887: INFO: Got endpoints: latency-svc-gz5gj [1.548064853s] May 11 20:11:20.002: INFO: Created: latency-svc-bshtb May 11 20:11:20.023: INFO: Got endpoints: latency-svc-bshtb [1.612738471s] May 11 20:11:20.071: INFO: Created: latency-svc-n4nqj May 11 20:11:20.087: INFO: Got endpoints: latency-svc-n4nqj [1.582064683s] May 11 20:11:20.144: INFO: Created: latency-svc-wbltc May 11 20:11:20.152: INFO: Got endpoints: latency-svc-wbltc [1.50088052s] May 11 20:11:20.181: INFO: Created: latency-svc-cl7nh May 11 20:11:20.195: INFO: Got endpoints: latency-svc-cl7nh [1.197318116s] May 11 20:11:20.293: INFO: Created: latency-svc-885cw May 11 20:11:20.296: INFO: Got endpoints: latency-svc-885cw [1.057458874s] May 11 20:11:20.324: INFO: Created: latency-svc-spqfj May 11 20:11:20.360: INFO: Got endpoints: latency-svc-spqfj [1.056903339s] May 11 20:11:20.458: INFO: Created: latency-svc-qnrx2 May 11 20:11:20.466: INFO: Got endpoints: latency-svc-qnrx2 [1.078353741s] May 11 20:11:20.485: INFO: Created: latency-svc-fm2s4 May 11 20:11:20.502: INFO: Got endpoints: latency-svc-fm2s4 [1.066046611s] May 11 20:11:20.527: INFO: Created: latency-svc-c4rp7 May 11 20:11:20.544: INFO: Got endpoints: latency-svc-c4rp7 [1.017110396s] May 11 20:11:20.631: INFO: Created: latency-svc-4txpv May 11 20:11:20.641: INFO: Got endpoints: latency-svc-4txpv [1.07152999s] May 11 20:11:20.683: INFO: Created: latency-svc-5d2gp May 11 20:11:20.772: INFO: Got endpoints: latency-svc-5d2gp [1.083016724s] May 11 20:11:20.811: INFO: Created: latency-svc-zjkk4 May 11 20:11:20.841: INFO: Got endpoints: latency-svc-zjkk4 [1.115514163s] May 11 20:11:20.964: INFO: Created: latency-svc-tqmgc May 11 20:11:20.997: INFO: Got endpoints: latency-svc-tqmgc [1.236169494s] May 11 20:11:20.999: INFO: Created: 
latency-svc-6x872 May 11 20:11:21.019: INFO: Got endpoints: latency-svc-6x872 [1.157654376s] May 11 20:11:21.039: INFO: Created: latency-svc-rd4bv May 11 20:11:21.054: INFO: Got endpoints: latency-svc-rd4bv [1.166750758s] May 11 20:11:21.185: INFO: Created: latency-svc-mgll4 May 11 20:11:21.210: INFO: Got endpoints: latency-svc-mgll4 [1.186084623s] May 11 20:11:21.243: INFO: Created: latency-svc-5fq8l May 11 20:11:21.258: INFO: Got endpoints: latency-svc-5fq8l [1.171166861s] May 11 20:11:21.329: INFO: Created: latency-svc-dnlwf May 11 20:11:21.342: INFO: Got endpoints: latency-svc-dnlwf [1.189386079s] May 11 20:11:21.361: INFO: Created: latency-svc-hf2fw May 11 20:11:21.384: INFO: Got endpoints: latency-svc-hf2fw [1.18949859s] May 11 20:11:21.491: INFO: Created: latency-svc-ztdmx May 11 20:11:21.494: INFO: Got endpoints: latency-svc-ztdmx [1.197512876s] May 11 20:11:21.553: INFO: Created: latency-svc-m8mwv May 11 20:11:21.583: INFO: Got endpoints: latency-svc-m8mwv [1.222862204s] May 11 20:11:21.663: INFO: Created: latency-svc-n2zrs May 11 20:11:21.703: INFO: Got endpoints: latency-svc-n2zrs [1.237864767s] May 11 20:11:21.800: INFO: Created: latency-svc-d44cv May 11 20:11:21.823: INFO: Got endpoints: latency-svc-d44cv [1.320993503s] May 11 20:11:21.993: INFO: Created: latency-svc-cwq47 May 11 20:11:22.034: INFO: Got endpoints: latency-svc-cwq47 [1.48969041s] May 11 20:11:22.168: INFO: Created: latency-svc-6dmpd May 11 20:11:22.207: INFO: Got endpoints: latency-svc-6dmpd [1.565950868s] May 11 20:11:22.207: INFO: Created: latency-svc-8cf6h May 11 20:11:22.227: INFO: Got endpoints: latency-svc-8cf6h [1.455202236s] May 11 20:11:22.264: INFO: Created: latency-svc-brcgr May 11 20:11:22.377: INFO: Got endpoints: latency-svc-brcgr [1.535584951s] May 11 20:11:22.385: INFO: Created: latency-svc-zbhvd May 11 20:11:22.425: INFO: Got endpoints: latency-svc-zbhvd [1.427642082s] May 11 20:11:22.460: INFO: Created: latency-svc-b5njl May 11 20:11:22.563: INFO: Got endpoints: latency-svc-b5njl [1.543947263s] May 11 20:11:22.604: INFO: Created: latency-svc-prl7f May 11 20:11:22.617: INFO: Got endpoints: latency-svc-prl7f [1.562789145s] May 11 20:11:22.724: INFO: Created: latency-svc-kjkhz May 11 20:11:22.731: INFO: Got endpoints: latency-svc-kjkhz [1.521857272s] May 11 20:11:22.760: INFO: Created: latency-svc-5wbqb May 11 20:11:22.779: INFO: Got endpoints: latency-svc-5wbqb [1.520982135s] May 11 20:11:22.810: INFO: Created: latency-svc-xgc25 May 11 20:11:22.886: INFO: Got endpoints: latency-svc-xgc25 [1.544003513s] May 11 20:11:22.888: INFO: Created: latency-svc-xtnxz May 11 20:11:22.915: INFO: Got endpoints: latency-svc-xtnxz [1.530910247s] May 11 20:11:22.952: INFO: Created: latency-svc-gmkbn May 11 20:11:22.966: INFO: Got endpoints: latency-svc-gmkbn [1.472385268s] May 11 20:11:23.030: INFO: Created: latency-svc-zvm8d May 11 20:11:23.056: INFO: Got endpoints: latency-svc-zvm8d [1.472908469s] May 11 20:11:23.098: INFO: Created: latency-svc-p6bqv May 11 20:11:23.111: INFO: Got endpoints: latency-svc-p6bqv [1.407465447s] May 11 20:11:23.174: INFO: Created: latency-svc-5rdnh May 11 20:11:23.180: INFO: Got endpoints: latency-svc-5rdnh [1.356847198s] May 11 20:11:23.204: INFO: Created: latency-svc-cllvk May 11 20:11:23.216: INFO: Got endpoints: latency-svc-cllvk [1.182547475s] May 11 20:11:23.239: INFO: Created: latency-svc-22tgk May 11 20:11:23.253: INFO: Got endpoints: latency-svc-22tgk [1.04564051s] May 11 20:11:23.335: INFO: Created: latency-svc-5dvdt May 11 20:11:23.343: INFO: Got endpoints: 
latency-svc-5dvdt [1.115376303s] May 11 20:11:23.391: INFO: Created: latency-svc-cmct7 May 11 20:11:23.403: INFO: Got endpoints: latency-svc-cmct7 [1.026416671s] May 11 20:11:23.421: INFO: Created: latency-svc-7v2j6 May 11 20:11:23.473: INFO: Got endpoints: latency-svc-7v2j6 [1.04737819s] May 11 20:11:23.510: INFO: Created: latency-svc-ptsr2 May 11 20:11:23.518: INFO: Got endpoints: latency-svc-ptsr2 [955.327256ms] May 11 20:11:23.559: INFO: Created: latency-svc-vxqb4 May 11 20:11:23.617: INFO: Got endpoints: latency-svc-vxqb4 [1.000827757s] May 11 20:11:23.648: INFO: Created: latency-svc-l7wjs May 11 20:11:23.672: INFO: Got endpoints: latency-svc-l7wjs [940.006115ms] May 11 20:11:23.767: INFO: Created: latency-svc-db8jk May 11 20:11:23.779: INFO: Got endpoints: latency-svc-db8jk [999.918462ms] May 11 20:11:23.799: INFO: Created: latency-svc-kbhq5 May 11 20:11:23.813: INFO: Got endpoints: latency-svc-kbhq5 [927.031337ms] May 11 20:11:23.940: INFO: Created: latency-svc-l7hfj May 11 20:11:23.944: INFO: Got endpoints: latency-svc-l7hfj [1.029005438s] May 11 20:11:23.973: INFO: Created: latency-svc-52zvf May 11 20:11:23.987: INFO: Got endpoints: latency-svc-52zvf [1.02134529s] May 11 20:11:24.034: INFO: Created: latency-svc-sx2cb May 11 20:11:24.095: INFO: Got endpoints: latency-svc-sx2cb [1.039046034s] May 11 20:11:24.110: INFO: Created: latency-svc-4c5sg May 11 20:11:24.126: INFO: Got endpoints: latency-svc-4c5sg [1.015135466s] May 11 20:11:24.146: INFO: Created: latency-svc-b876w May 11 20:11:24.157: INFO: Got endpoints: latency-svc-b876w [977.323025ms] May 11 20:11:24.175: INFO: Created: latency-svc-fzqs5 May 11 20:11:24.187: INFO: Got endpoints: latency-svc-fzqs5 [970.507558ms] May 11 20:11:24.239: INFO: Created: latency-svc-pjf8b May 11 20:11:24.256: INFO: Got endpoints: latency-svc-pjf8b [1.00315572s] May 11 20:11:24.285: INFO: Created: latency-svc-7xpg9 May 11 20:11:24.301: INFO: Got endpoints: latency-svc-7xpg9 [958.783332ms] May 11 20:11:24.319: INFO: Created: latency-svc-chmdp May 11 20:11:24.332: INFO: Got endpoints: latency-svc-chmdp [928.773348ms] May 11 20:11:24.389: INFO: Created: latency-svc-mpgkt May 11 20:11:24.398: INFO: Got endpoints: latency-svc-mpgkt [925.724654ms] May 11 20:11:24.442: INFO: Created: latency-svc-mp4d9 May 11 20:11:24.471: INFO: Got endpoints: latency-svc-mp4d9 [952.536201ms] May 11 20:11:24.569: INFO: Created: latency-svc-bx2s8 May 11 20:11:24.619: INFO: Got endpoints: latency-svc-bx2s8 [1.001908977s] May 11 20:11:24.620: INFO: Created: latency-svc-92fd4 May 11 20:11:24.791: INFO: Got endpoints: latency-svc-92fd4 [1.119234787s] May 11 20:11:25.067: INFO: Created: latency-svc-wg6tf May 11 20:11:25.067: INFO: Got endpoints: latency-svc-wg6tf [1.287906892s] May 11 20:11:25.275: INFO: Created: latency-svc-gpgtt May 11 20:11:25.307: INFO: Got endpoints: latency-svc-gpgtt [1.493951116s] May 11 20:11:25.328: INFO: Created: latency-svc-gw45x May 11 20:11:25.343: INFO: Got endpoints: latency-svc-gw45x [1.398031487s] May 11 20:11:25.364: INFO: Created: latency-svc-w4lgc May 11 20:11:25.461: INFO: Got endpoints: latency-svc-w4lgc [1.473791337s] May 11 20:11:25.463: INFO: Created: latency-svc-mwcwd May 11 20:11:25.469: INFO: Got endpoints: latency-svc-mwcwd [1.37393328s] May 11 20:11:25.495: INFO: Created: latency-svc-85gqn May 11 20:11:25.523: INFO: Got endpoints: latency-svc-85gqn [1.397204469s] May 11 20:11:25.629: INFO: Created: latency-svc-4q67t May 11 20:11:25.632: INFO: Got endpoints: latency-svc-4q67t [1.474531174s] May 11 20:11:25.632: INFO: Latencies: 
[100.630279ms 114.028295ms 151.901896ms 207.942195ms 234.392127ms 264.465681ms 345.911923ms 368.860375ms 444.479147ms 544.065574ms 603.441951ms 669.32864ms 676.753363ms 683.819613ms 712.324137ms 712.506366ms 727.804326ms 753.845669ms 760.760716ms 770.365482ms 790.575007ms 807.078048ms 810.111403ms 839.984114ms 852.45825ms 869.701664ms 883.292139ms 889.013742ms 903.651666ms 925.724654ms 927.031337ms 928.773348ms 930.78231ms 940.006115ms 945.838828ms 952.536201ms 955.327256ms 958.783332ms 970.507558ms 977.103923ms 977.323025ms 999.918462ms 1.000827757s 1.001908977s 1.00315572s 1.012980259s 1.015135466s 1.017110396s 1.02134529s 1.026416671s 1.029005438s 1.039046034s 1.04564051s 1.04737819s 1.049399326s 1.056903339s 1.057458874s 1.066046611s 1.07152999s 1.078353741s 1.083016724s 1.083322612s 1.085963591s 1.108347651s 1.111357904s 1.115376303s 1.115514163s 1.119234787s 1.129758175s 1.133420647s 1.137328s 1.144117392s 1.144850937s 1.157654376s 1.158861886s 1.166750758s 1.171166861s 1.171726996s 1.173031325s 1.182547475s 1.184699216s 1.186084623s 1.189386079s 1.18949859s 1.196881286s 1.197318116s 1.197512876s 1.202759188s 1.205364659s 1.217938847s 1.222862204s 1.222909898s 1.230352523s 1.236169494s 1.236596796s 1.237864767s 1.245235637s 1.256054053s 1.279893674s 1.287479297s 1.287906892s 1.299301528s 1.31250468s 1.320993503s 1.327251038s 1.34187157s 1.356847198s 1.36404117s 1.37393328s 1.386680848s 1.397204469s 1.398031487s 1.403655849s 1.407465447s 1.427642082s 1.432660908s 1.454679638s 1.455202236s 1.456285504s 1.465469373s 1.467123335s 1.467300256s 1.468146358s 1.472385268s 1.472908469s 1.473791337s 1.474531174s 1.483768203s 1.48969041s 1.490273175s 1.490317549s 1.493951116s 1.498413969s 1.50088052s 1.520749805s 1.520982135s 1.521857272s 1.525461181s 1.530910247s 1.535584951s 1.543947263s 1.544003513s 1.548064853s 1.561096672s 1.562789145s 1.565950868s 1.566232594s 1.582064683s 1.591291037s 1.610305768s 1.612738471s 1.619822929s 1.630148126s 1.632485277s 1.636643249s 1.637306959s 1.638592424s 1.640864829s 1.642496615s 1.642558197s 1.644453331s 1.64779238s 1.655600533s 1.666819586s 1.667258983s 1.67630017s 1.683850492s 1.684873482s 1.702345717s 1.706307782s 1.721606181s 1.751544171s 1.772424057s 1.803022281s 1.803781608s 1.826553423s 1.834915668s 1.839383687s 1.850356129s 1.858140646s 1.85856921s 1.859030503s 1.861306911s 1.879606012s 1.935197196s 2.176593498s 2.198184497s 2.363622014s 2.413751396s 2.436423815s 2.545263796s 2.623217891s 2.636493452s 2.657214618s 2.670543216s 2.671862273s 2.688842484s 2.758693514s 2.7951299s 2.820853485s] May 11 20:11:25.632: INFO: 50 %ile: 1.287906892s May 11 20:11:25.632: INFO: 90 %ile: 1.85856921s May 11 20:11:25.632: INFO: 99 %ile: 2.7951299s May 11 20:11:25.632: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:11:25.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7112" for this suite. 
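Each "Created:"/"Got endpoints:" pair above contributes one sample: the time from creating a service until its endpoint is observed. The suite then reports percentiles over the 200 sorted samples. A small sketch of that aggregation; the nearest-rank rule here is an assumption about the framework's exact method, though it does reproduce the logged 99 %ile:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the p-th percentile from an ascending slice using a
// simple nearest-rank rule; the e2e framework's exact rule may differ.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := len(sorted)*p/100 - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// Three samples copied from the latency list above; the real test has 200.
	latencies := []time.Duration{
		100630279 * time.Nanosecond,  // 100.630279ms
		1287906892 * time.Nanosecond, // 1.287906892s
		2795129900 * time.Nanosecond, // 2.7951299s
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(latencies, p))
	}
}
```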
• [SLOW TEST:23.044 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":230,"skipped":3730,"failed":0} SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:11:25.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-b56b67dc-0a64-44d9-91e9-618fe3c59e84 STEP: Creating secret with name secret-projected-all-test-volume-97aa4780-f6b9-4264-9ae9-bf0528ee11a4 STEP: Creating a pod to test Check all projections for projected volume plugin May 11 20:11:26.137: INFO: Waiting up to 5m0s for pod "projected-volume-cf4db04c-4074-4869-9f37-0acbf298f9da" in namespace "projected-5040" to be "Succeeded or Failed" May 11 20:11:26.170: INFO: Pod "projected-volume-cf4db04c-4074-4869-9f37-0acbf298f9da": Phase="Pending", Reason="", readiness=false. Elapsed: 32.905132ms May 11 20:11:28.383: INFO: Pod "projected-volume-cf4db04c-4074-4869-9f37-0acbf298f9da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245939494s May 11 20:11:30.387: INFO: Pod "projected-volume-cf4db04c-4074-4869-9f37-0acbf298f9da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.250302873s STEP: Saw pod success May 11 20:11:30.387: INFO: Pod "projected-volume-cf4db04c-4074-4869-9f37-0acbf298f9da" satisfied condition "Succeeded or Failed" May 11 20:11:30.391: INFO: Trying to get logs from node latest-worker2 pod projected-volume-cf4db04c-4074-4869-9f37-0acbf298f9da container projected-all-volume-test: STEP: delete the pod May 11 20:11:30.496: INFO: Waiting for pod projected-volume-cf4db04c-4074-4869-9f37-0acbf298f9da to disappear May 11 20:11:30.522: INFO: Pod projected-volume-cf4db04c-4074-4869-9f37-0acbf298f9da no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:11:30.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5040" for this suite. 
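The projected-volume test above mounts a configMap, a secret, and downward API data as one merged volume, which is exactly what the `Projected` volume source expresses. A minimal sketch using the generated source names from the log; the downward API item and volume name are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One projected volume combining all three projection sources into a single mount.
	vol := corev1.Volume{
		Name: "projected-volume", // assumed volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume-b56b67dc-0a64-44d9-91e9-618fe3c59e84"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume-97aa4780-f6b9-4264-9ae9-bf0528ee11a4"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname", // assumed item
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
```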
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3733,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:11:30.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-5fd3cafd-0da3-411c-a484-0c485061033b STEP: Creating a pod to test consume secrets May 11 20:11:31.008: INFO: Waiting up to 5m0s for pod "pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b" in namespace "secrets-1338" to be "Succeeded or Failed" May 11 20:11:31.084: INFO: Pod "pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b": Phase="Pending", Reason="", readiness=false. Elapsed: 75.733748ms May 11 20:11:33.263: INFO: Pod "pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254832188s May 11 20:11:35.330: INFO: Pod "pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b": Phase="Running", Reason="", readiness=true. Elapsed: 4.321329205s May 11 20:11:37.629: INFO: Pod "pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.620982078s STEP: Saw pod success May 11 20:11:37.629: INFO: Pod "pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b" satisfied condition "Succeeded or Failed" May 11 20:11:37.654: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b container secret-env-test: STEP: delete the pod May 11 20:11:37.810: INFO: Waiting for pod pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b to disappear May 11 20:11:37.815: INFO: Pod pod-secrets-7b695fa7-f63d-4ce2-bea6-4140a24da80b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:11:37.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1338" for this suite. 
• [SLOW TEST:7.063 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3766,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:11:37.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 20:11:38.030: INFO: Waiting up to 5m0s for pod "pod-4da6361e-c743-4ca7-8710-a1fb96fbc715" in namespace "emptydir-9573" to be "Succeeded or Failed" May 11 20:11:38.091: INFO: Pod "pod-4da6361e-c743-4ca7-8710-a1fb96fbc715": Phase="Pending", Reason="", readiness=false. Elapsed: 61.07162ms May 11 20:11:40.110: INFO: Pod "pod-4da6361e-c743-4ca7-8710-a1fb96fbc715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079904832s May 11 20:11:42.175: INFO: Pod "pod-4da6361e-c743-4ca7-8710-a1fb96fbc715": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145083313s May 11 20:11:44.331: INFO: Pod "pod-4da6361e-c743-4ca7-8710-a1fb96fbc715": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.300369613s STEP: Saw pod success May 11 20:11:44.331: INFO: Pod "pod-4da6361e-c743-4ca7-8710-a1fb96fbc715" satisfied condition "Succeeded or Failed" May 11 20:11:44.373: INFO: Trying to get logs from node latest-worker2 pod pod-4da6361e-c743-4ca7-8710-a1fb96fbc715 container test-container: STEP: delete the pod May 11 20:11:45.273: INFO: Waiting for pod pod-4da6361e-c743-4ca7-8710-a1fb96fbc715 to disappear May 11 20:11:45.326: INFO: Pod pod-4da6361e-c743-4ca7-8710-a1fb96fbc715 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:11:45.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9573" for this suite. 
• [SLOW TEST:7.527 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":233,"skipped":3772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:11:45.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 20:11:45.727: INFO: Waiting up to 5m0s for pod "pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7" in namespace "emptydir-8532" to be "Succeeded or Failed" May 11 20:11:45.780: INFO: Pod "pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.714187ms May 11 20:11:47.892: INFO: Pod "pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1650218s May 11 20:11:49.910: INFO: Pod "pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.182603155s May 11 20:11:51.934: INFO: Pod "pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206657875s STEP: Saw pod success May 11 20:11:51.934: INFO: Pod "pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7" satisfied condition "Succeeded or Failed" May 11 20:11:51.937: INFO: Trying to get logs from node latest-worker2 pod pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7 container test-container: STEP: delete the pod May 11 20:11:52.108: INFO: Waiting for pod pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7 to disappear May 11 20:11:52.126: INFO: Pod pod-8b1be164-33ce-4969-bbc9-cb53757ac4d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:11:52.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8532" for this suite. 
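The two emptyDir tests above differ only in the backing medium and the file mode checked: `Medium: Memory` mounts tmpfs (the 0644 case), while leaving the medium unset uses the node's default storage (the 0666 case); both run as a non-root user. A minimal sketch of the tmpfs variant; the image, UID, and mount path are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1001) // any non-root UID; the exact value is an assumption
	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox", // assumed; the suite's test image creates a file with the
			// requested mode inside the mount and verifies both mode and mount type
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// StorageMediumMemory backs the emptyDir with tmpfs; omit Medium
				// for the node-default backing used by the 0666 test.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}
```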
• [SLOW TEST:6.806 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":234,"skipped":3798,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:11:52.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:11:52.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9278" for this suite. 
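The table-transformation test above relies on content negotiation: a client asks the API server for a server-side Table rendering via the Accept header, and a backend that cannot convert to a meta.k8s.io Table answers 406 Not Acceptable. A rough sketch of such a request against the server address seen in this log; the resource path and the bare HTTP client (no kubeconfig TLS setup) are assumptions:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Ask for a Table rendering of a resource list. Endpoints whose backend
	// does not implement Table conversion respond 406 Not Acceptable.
	req, err := http.NewRequest("GET",
		"https://172.30.12.66:32773/api/v1/namespaces/default/pods", nil) // server from the log; path assumed
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io")
	// Real use needs the kubeconfig's TLS credentials; DefaultClient is a placeholder.
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode) // 200 with a Table, or 406 if unsupported
}
```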
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":235,"skipped":3819,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:11:52.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-336b0aaf-6f8c-49fc-ae4f-baa32e0d866d in namespace container-probe-9540 May 11 20:11:58.796: INFO: Started pod test-webserver-336b0aaf-6f8c-49fc-ae4f-baa32e0d866d in namespace container-probe-9540 STEP: checking the pod's current state and verifying that restartCount is present May 11 20:11:58.811: INFO: Initial restart count of pod test-webserver-336b0aaf-6f8c-49fc-ae4f-baa32e0d866d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:16:00.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9540" for this suite. 
• [SLOW TEST:248.180 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3838,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:16:00.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 20:16:01.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59467493-e8b5-41fa-a262-82eab40c5ba2" in namespace "projected-700" to be "Succeeded or Failed" May 11 20:16:01.249: INFO: Pod "downwardapi-volume-59467493-e8b5-41fa-a262-82eab40c5ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 123.519951ms May 11 20:16:03.262: INFO: Pod "downwardapi-volume-59467493-e8b5-41fa-a262-82eab40c5ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136507455s May 11 20:16:05.266: INFO: Pod "downwardapi-volume-59467493-e8b5-41fa-a262-82eab40c5ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140245624s STEP: Saw pod success May 11 20:16:05.266: INFO: Pod "downwardapi-volume-59467493-e8b5-41fa-a262-82eab40c5ba2" satisfied condition "Succeeded or Failed" May 11 20:16:05.268: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-59467493-e8b5-41fa-a262-82eab40c5ba2 container client-container: STEP: delete the pod May 11 20:16:05.411: INFO: Waiting for pod downwardapi-volume-59467493-e8b5-41fa-a262-82eab40c5ba2 to disappear May 11 20:16:05.417: INFO: Pod downwardapi-volume-59467493-e8b5-41fa-a262-82eab40c5ba2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:16:05.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-700" for this suite. 
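The downward API test above exposes the container's CPU limit through a volume file; because the container sets no limit, the published value falls back to the node's allocatable CPU, which is what the test asserts. A minimal sketch of the volume item; the container name matches the log, while the volume and file names are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// With no CPU limit set on "client-container", this file is populated
	// with the node's allocatable CPU instead of a container limit.
	vol := corev1.Volume{
		Name: "podinfo", // assumed volume name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit", // assumed file name
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
```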
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3839,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:16:05.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 11 20:16:05.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2977' May 11 20:16:10.772: INFO: stderr: "" May 11 20:16:10.772: INFO: stdout: "pod/pause created\n" May 11 20:16:10.772: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 11 20:16:10.772: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2977" to be "running and ready" May 11 20:16:10.813: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 41.349629ms May 11 20:16:12.848: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075816481s May 11 20:16:14.864: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.092615626s May 11 20:16:14.864: INFO: Pod "pause" satisfied condition "running and ready" May 11 20:16:14.864: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 11 20:16:14.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2977' May 11 20:16:14.970: INFO: stderr: "" May 11 20:16:14.970: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 11 20:16:14.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2977' May 11 20:16:15.080: INFO: stderr: "" May 11 20:16:15.080: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 11 20:16:15.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2977' May 11 20:16:15.181: INFO: stderr: "" May 11 20:16:15.181: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 11 20:16:15.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2977' May 11 20:16:15.271: INFO: stderr: "" May 11 20:16:15.271: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 11 20:16:15.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2977' May 11 20:16:15.443: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 20:16:15.443: INFO: stdout: "pod \"pause\" force deleted\n" May 11 20:16:15.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2977' May 11 20:16:15.822: INFO: stderr: "No resources found in kubectl-2977 namespace.\n" May 11 20:16:15.822: INFO: stdout: "" May 11 20:16:15.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2977 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 20:16:15.967: INFO: stderr: "" May 11 20:16:15.967: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:16:15.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2977" for this suite. 
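The kubectl invocations above add, verify, and remove the label. The same mutation can be done programmatically with a strategic-merge patch, where setting a label key to null deletes it. A minimal client-go sketch against the pod and namespace from the log; the kubeconfig path matches the log, and error handling is kept deliberately crude:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of `kubectl label pods pause testing-label=testing-label-value`.
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods("kubectl-2977").Patch(context.TODO(), "pause",
		types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Equivalent of `kubectl label pods pause testing-label-`: null removes the key.
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := cs.CoreV1().Pods("kubectl-2977").Patch(context.TODO(), "pause",
		types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("label added and removed")
}
```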
• [SLOW TEST:10.558 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":238,"skipped":3842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:16:15.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:16:16.482: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5c176755-e89b-4758-9dc2-3111246d9735", Controller:(*bool)(0xc004485ba2), BlockOwnerDeletion:(*bool)(0xc004485ba3)}} May 11 20:16:16.564: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b0c730c6-4c8e-4122-81d2-d1e1ad4ae8e0", Controller:(*bool)(0xc004485f7a), BlockOwnerDeletion:(*bool)(0xc004485f7b)}} May 11 20:16:16.722: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d5f4cf0e-4305-4859-9cdf-7098a2b03396", Controller:(*bool)(0xc0044ac16a), BlockOwnerDeletion:(*bool)(0xc0044ac16b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:16:21.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9434" for this suite. 
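The garbage collector test above wires three pods into an ownership circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and expects the GC to delete all three anyway, since no live external owner remains. A sketch of the OwnerReference construction mirrored from the struct dumps above; the UIDs here are placeholders, since the real ones come from the created pods:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownerRef builds the kind of reference dumped in the log: each pod names
// another pod as its controller, forming the circle pod1 -> pod3 -> pod2 -> pod1.
func ownerRef(name string, uid types.UID) metav1.OwnerReference {
	ctrl := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid,
		Controller:         &ctrl,
		BlockOwnerDeletion: &ctrl,
	}
}

func main() {
	// Placeholder UIDs; in the test these are read back from the API server.
	fmt.Printf("pod1 owned by %+v\n", ownerRef("pod3", "uid-of-pod3"))
	fmt.Printf("pod2 owned by %+v\n", ownerRef("pod1", "uid-of-pod1"))
	fmt.Printf("pod3 owned by %+v\n", ownerRef("pod2", "uid-of-pod2"))
}
```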
• [SLOW TEST:5.831 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":239,"skipped":3883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:16:21.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 20:16:23.101: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 20:16:25.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724824983, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724824983, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724824983, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724824983, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:16:27.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724824983, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724824983, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724824983, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724824983, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is 
progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 20:16:30.143: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:16:30.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7420" for this suite. STEP: Destroying namespace "webhook-7420-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.648 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":240,"skipped":3914,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:16:30.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5437.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5437.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5437.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5437.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5437.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5437.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 20:16:36.648: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:36.651: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:36.655: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:36.657: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:36.664: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:36.670: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:36.672: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod 
dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:36.690: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:36.695: INFO: Lookups using dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local] May 11 20:16:41.699: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:41.701: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:41.703: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:41.705: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:41.710: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:41.712: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:41.714: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:41.715: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:41.719: INFO: Lookups using dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local] May 11 20:16:46.699: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:46.701: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:46.704: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:46.706: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:46.712: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:46.714: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:46.716: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:46.718: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:46.722: INFO: Lookups using dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local] May 11 20:16:51.830: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:51.890: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:51.892: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:52.021: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:52.030: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:52.032: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:52.035: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:52.038: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:52.043: INFO: Lookups using dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local] May 11 20:16:56.956: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:56.961: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:57.007: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:57.037: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested 
resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:57.403: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:57.405: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:57.414: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:57.418: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:16:57.503: INFO: Lookups using dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local] May 11 20:17:01.699: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:17:01.702: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:17:01.705: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:17:01.708: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:17:01.714: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:17:01.716: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:17:01.720: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:17:01.723: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local from pod dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082: the server could not find the requested resource (get pods dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082) May 11 20:17:01.728: INFO: Lookups using dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5437.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5437.svc.cluster.local jessie_udp@dns-test-service-2.dns-5437.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5437.svc.cluster.local] May 11 20:17:06.724: INFO: DNS probes using dns-5437/dns-test-4482993b-52fe-45c9-9d0d-a9a2bcf30082 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:17:06.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5437" for this suite. • [SLOW TEST:37.977 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":241,"skipped":3921,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:17:08.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 11 20:17:11.089: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 11 20:17:13.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825031, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825031, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825031, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825031, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:17:15.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825031, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825031, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825031, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825031, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 20:17:18.135: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:17:18.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:17:23.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8803" for this suite. 
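The conversion spec above creates one custom resource at v1 and one at v2, then lists the collection at each version; serving that non-homogeneous list requires the conversion webhook deployed earlier to translate whichever version is stored into the version being requested. A sketch of how such a mixed list can be inspected once a multi-version CRD with webhook conversion is installed (the group and resource names below are hypothetical, not the ones generated by this test):

  # fully-qualified resource.version.group names pin each request to one served version
  kubectl get widgets.v1.example.com -o yaml
  kubectl get widgets.v2.example.com -o yaml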
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:14.796 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":242,"skipped":3922,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:17:23.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-322b5b89-3f37-4d82-9f2b-46988f70c6ef in namespace container-probe-1594 May 11 20:17:36.464: INFO: Started pod busybox-322b5b89-3f37-4d82-9f2b-46988f70c6ef in namespace container-probe-1594 STEP: checking the pod's current state and verifying that restartCount is present May 11 20:17:36.467: INFO: Initial restart count of pod busybox-322b5b89-3f37-4d82-9f2b-46988f70c6ef is 0 May 11 20:18:28.509: INFO: Restart count of pod container-probe-1594/busybox-322b5b89-3f37-4d82-9f2b-46988f70c6ef is now 1 (52.042327955s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:18:29.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1594" for this suite. 
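The restart recorded above (restartCount going from 0 to 1 roughly 52s after startup) is the expected effect of an exec liveness probe: the kubelet periodically runs `cat /tmp/health` inside the container, and once the file is gone the probe fails and the container is restarted. A minimal pod that behaves the same way, assuming illustrative probe timings and a busybox command that removes its own health file (neither detail is read from this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo
  spec:
    containers:
    - name: busybox
      image: busybox
      # create the health file, then delete it so later probes fail
      command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF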
• [SLOW TEST:65.917 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":3923,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:18:29.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 20:18:30.172: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 20:18:34.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825110, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825110, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825110, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825110, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:18:36.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825110, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825110, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825110, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825110, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 20:18:39.534: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:18:52.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2574" for this suite. STEP: Destroying namespace "webhook-2574-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.610 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":244,"skipped":3930,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:18:54.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-17970b61-7332-47a1-aa82-6f6cc47a20ec STEP: Creating a pod to test consume configMaps May 11 20:18:59.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1" in namespace "projected-5223" to be "Succeeded or Failed" May 11 20:19:00.490: INFO: Pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.395719627s May 11 20:19:02.574: INFO: Pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479811195s May 11 20:19:06.194: INFO: Pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.0997016s May 11 20:19:09.287: INFO: Pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192278336s May 11 20:19:11.301: INFO: Pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.206412629s May 11 20:19:14.040: INFO: Pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.945713737s May 11 20:19:16.174: INFO: Pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.079235012s STEP: Saw pod success May 11 20:19:16.174: INFO: Pod "pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1" satisfied condition "Succeeded or Failed" May 11 20:19:16.504: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1 container projected-configmap-volume-test: STEP: delete the pod May 11 20:19:17.039: INFO: Waiting for pod pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1 to disappear May 11 20:19:17.071: INFO: Pod pod-projected-configmaps-93f51810-1528-4ea1-ac51-d425e022bca1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:19:17.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5223" for this suite. • [SLOW TEST:22.402 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":245,"skipped":3943,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:19:17.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-622 STEP: creating service affinity-clusterip-transition in namespace services-622 STEP: creating replication controller affinity-clusterip-transition in namespace services-622 I0511 20:19:17.567753 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-622, replica count: 3 I0511 20:19:20.618132 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 
created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:19:23.618383 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:19:26.618586 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:19:29.618845 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:19:32.619065 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:19:35.619302 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:19:38.621454 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:19:41.621720 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 20:19:42.208: INFO: Creating new exec pod May 11 20:19:50.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-622 execpod-affinityvvj4g -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 11 20:19:51.265: INFO: stderr: "I0511 20:19:51.176953 4025 log.go:172] (0xc000661340) (0xc000b7e280) Create stream\nI0511 20:19:51.177012 4025 log.go:172] (0xc000661340) (0xc000b7e280) Stream added, broadcasting: 1\nI0511 20:19:51.181917 4025 log.go:172] (0xc000661340) Reply frame received for 1\nI0511 20:19:51.181981 4025 log.go:172] (0xc000661340) (0xc0006d4be0) Create stream\nI0511 20:19:51.182007 4025 log.go:172] (0xc000661340) (0xc0006d4be0) Stream added, broadcasting: 3\nI0511 20:19:51.182924 4025 log.go:172] (0xc000661340) Reply frame received for 3\nI0511 20:19:51.182950 4025 log.go:172] (0xc000661340) (0xc00042af00) Create stream\nI0511 20:19:51.182962 4025 log.go:172] (0xc000661340) (0xc00042af00) Stream added, broadcasting: 5\nI0511 20:19:51.183857 4025 log.go:172] (0xc000661340) Reply frame received for 5\nI0511 20:19:51.260016 4025 log.go:172] (0xc000661340) Data frame received for 5\nI0511 20:19:51.260050 4025 log.go:172] (0xc00042af00) (5) Data frame handling\nI0511 20:19:51.260063 4025 log.go:172] (0xc00042af00) (5) Data frame sent\nI0511 20:19:51.260071 4025 log.go:172] (0xc000661340) Data frame received for 5\nI0511 20:19:51.260076 4025 log.go:172] (0xc00042af00) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0511 20:19:51.260107 4025 log.go:172] (0xc000661340) Data frame received for 3\nI0511 20:19:51.260114 4025 log.go:172] (0xc0006d4be0) (3) Data frame handling\nI0511 20:19:51.261438 4025 log.go:172] (0xc000661340) Data frame received for 1\nI0511 20:19:51.261458 4025 log.go:172] (0xc000b7e280) (1) Data frame handling\nI0511 20:19:51.261473 4025 log.go:172] (0xc000b7e280) (1) Data frame sent\nI0511 20:19:51.261484 4025 log.go:172] (0xc000661340) (0xc000b7e280) Stream removed, 
broadcasting: 1\nI0511 20:19:51.261548 4025 log.go:172] (0xc000661340) Go away received\nI0511 20:19:51.261739 4025 log.go:172] (0xc000661340) (0xc000b7e280) Stream removed, broadcasting: 1\nI0511 20:19:51.261755 4025 log.go:172] (0xc000661340) (0xc0006d4be0) Stream removed, broadcasting: 3\nI0511 20:19:51.261762 4025 log.go:172] (0xc000661340) (0xc00042af00) Stream removed, broadcasting: 5\n" May 11 20:19:51.265: INFO: stdout: "" May 11 20:19:51.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-622 execpod-affinityvvj4g -- /bin/sh -x -c nc -zv -t -w 2 10.107.47.173 80' May 11 20:19:52.043: INFO: stderr: "I0511 20:19:51.976849 4044 log.go:172] (0xc0002fa160) (0xc000623400) Create stream\nI0511 20:19:51.976914 4044 log.go:172] (0xc0002fa160) (0xc000623400) Stream added, broadcasting: 1\nI0511 20:19:51.978596 4044 log.go:172] (0xc0002fa160) Reply frame received for 1\nI0511 20:19:51.978637 4044 log.go:172] (0xc0002fa160) (0xc0005a5680) Create stream\nI0511 20:19:51.978645 4044 log.go:172] (0xc0002fa160) (0xc0005a5680) Stream added, broadcasting: 3\nI0511 20:19:51.979439 4044 log.go:172] (0xc0002fa160) Reply frame received for 3\nI0511 20:19:51.979479 4044 log.go:172] (0xc0002fa160) (0xc000526f00) Create stream\nI0511 20:19:51.979498 4044 log.go:172] (0xc0002fa160) (0xc000526f00) Stream added, broadcasting: 5\nI0511 20:19:51.980162 4044 log.go:172] (0xc0002fa160) Reply frame received for 5\nI0511 20:19:52.037547 4044 log.go:172] (0xc0002fa160) Data frame received for 3\nI0511 20:19:52.037583 4044 log.go:172] (0xc0005a5680) (3) Data frame handling\nI0511 20:19:52.037612 4044 log.go:172] (0xc0002fa160) Data frame received for 5\nI0511 20:19:52.037625 4044 log.go:172] (0xc000526f00) (5) Data frame handling\nI0511 20:19:52.037642 4044 log.go:172] (0xc000526f00) (5) Data frame sent\n+ nc -zv -t -w 2 10.107.47.173 80\nConnection to 10.107.47.173 80 port [tcp/http] succeeded!\nI0511 20:19:52.037660 4044 log.go:172] (0xc0002fa160) Data frame received for 5\nI0511 20:19:52.037690 4044 log.go:172] (0xc000526f00) (5) Data frame handling\nI0511 20:19:52.038439 4044 log.go:172] (0xc0002fa160) Data frame received for 1\nI0511 20:19:52.038460 4044 log.go:172] (0xc000623400) (1) Data frame handling\nI0511 20:19:52.038479 4044 log.go:172] (0xc000623400) (1) Data frame sent\nI0511 20:19:52.038495 4044 log.go:172] (0xc0002fa160) (0xc000623400) Stream removed, broadcasting: 1\nI0511 20:19:52.038631 4044 log.go:172] (0xc0002fa160) Go away received\nI0511 20:19:52.038809 4044 log.go:172] (0xc0002fa160) (0xc000623400) Stream removed, broadcasting: 1\nI0511 20:19:52.038826 4044 log.go:172] (0xc0002fa160) (0xc0005a5680) Stream removed, broadcasting: 3\nI0511 20:19:52.038835 4044 log.go:172] (0xc0002fa160) (0xc000526f00) Stream removed, broadcasting: 5\n" May 11 20:19:52.043: INFO: stdout: "" May 11 20:19:52.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-622 execpod-affinityvvj4g -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.47.173:80/ ; done' May 11 20:19:53.025: INFO: stderr: "I0511 20:19:52.872484 4061 log.go:172] (0xc000a72000) (0xc0000f3cc0) Create stream\nI0511 20:19:52.872541 4061 log.go:172] (0xc000a72000) (0xc0000f3cc0) Stream added, broadcasting: 1\nI0511 20:19:52.874610 4061 log.go:172] (0xc000a72000) Reply frame received for 1\nI0511 20:19:52.874634 4061 log.go:172] (0xc000a72000) 
(0xc0003920a0) Create stream\nI0511 20:19:52.874641 4061 log.go:172] (0xc000a72000) (0xc0003920a0) Stream added, broadcasting: 3\nI0511 20:19:52.875318 4061 log.go:172] (0xc000a72000) Reply frame received for 3\nI0511 20:19:52.875345 4061 log.go:172] (0xc000a72000) (0xc00015f2c0) Create stream\nI0511 20:19:52.875355 4061 log.go:172] (0xc000a72000) (0xc00015f2c0) Stream added, broadcasting: 5\nI0511 20:19:52.876071 4061 log.go:172] (0xc000a72000) Reply frame received for 5\nI0511 20:19:52.947820 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.947941 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.947991 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\nI0511 20:19:52.948003 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.948007 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.948014 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.950737 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.950754 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.950762 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.951263 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.951276 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.951283 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.951300 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.951316 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.951332 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.955372 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.955393 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.955412 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.955779 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.955794 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.955800 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.955807 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.955811 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.955817 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.959849 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.959862 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.959876 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.960233 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.960249 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.960263 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.960274 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.960281 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.960285 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\nI0511 20:19:52.960290 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.960294 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.960309 4061 log.go:172] 
(0xc00015f2c0) (5) Data frame sent\nI0511 20:19:52.963178 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.963191 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.963203 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.963507 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.963523 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.963529 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.963535 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.963544 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.963553 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.967267 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.967282 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.967298 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.967659 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.967674 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.967685 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.967701 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.967714 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.967723 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.970502 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.970528 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.970554 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.970892 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.970911 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.970918 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.970926 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.970931 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.970936 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.974477 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.974488 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.974497 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.974831 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.974845 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.974851 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.974867 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.974879 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.974890 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.978260 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.978278 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.978293 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.978698 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.978723 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.978732 4061 
log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.978747 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.978754 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.978769 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.982853 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.982878 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.982907 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.983724 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.983756 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.983767 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\nI0511 20:19:52.983776 4061 log.go:172] (0xc000a72000) Data frame received for 5\n+ I0511 20:19:52.983802 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.983834 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.983847 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.983868 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.983879 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\necho\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.988504 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.988515 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.988529 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.989099 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.989245 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.989257 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.989269 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.989275 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.989284 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.992546 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.992558 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.992566 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.992973 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.993002 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.993018 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.993039 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.993055 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.993074 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.996869 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:52.996888 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.996899 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.997344 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:52.997367 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:52.997377 4061 log.go:172] (0xc000a72000) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:52.997391 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:52.997400 4061 
log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:52.997420 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\nI0511 20:19:53.002745 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:53.002764 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:53.002782 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:53.003141 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:53.003155 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:53.003164 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\nI0511 20:19:53.003172 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:53.003181 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.003208 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:53.003234 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:53.003246 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:53.003259 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\nI0511 20:19:53.008779 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:53.008802 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:53.008820 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:53.009446 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:53.009464 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:53.009475 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:53.009494 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:53.009506 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:53.009515 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.014583 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:53.014600 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:53.014611 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:53.015084 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:53.015102 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:53.015126 4061 log.go:172] (0xc00015f2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.015203 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:53.015232 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:53.015251 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:53.019161 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:53.019178 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:53.019189 4061 log.go:172] (0xc0003920a0) (3) Data frame sent\nI0511 20:19:53.019780 4061 log.go:172] (0xc000a72000) Data frame received for 5\nI0511 20:19:53.019801 4061 log.go:172] (0xc00015f2c0) (5) Data frame handling\nI0511 20:19:53.019844 4061 log.go:172] (0xc000a72000) Data frame received for 3\nI0511 20:19:53.019855 4061 log.go:172] (0xc0003920a0) (3) Data frame handling\nI0511 20:19:53.021056 4061 log.go:172] (0xc000a72000) Data frame received for 1\nI0511 20:19:53.021069 4061 log.go:172] (0xc0000f3cc0) (1) Data frame handling\nI0511 20:19:53.021087 4061 log.go:172] (0xc0000f3cc0) (1) Data frame sent\nI0511 20:19:53.021098 4061 log.go:172] (0xc000a72000) (0xc0000f3cc0) Stream removed, 
broadcasting: 1\nI0511 20:19:53.021312 4061 log.go:172] (0xc000a72000) Go away received\nI0511 20:19:53.021453 4061 log.go:172] (0xc000a72000) (0xc0000f3cc0) Stream removed, broadcasting: 1\nI0511 20:19:53.021467 4061 log.go:172] (0xc000a72000) (0xc0003920a0) Stream removed, broadcasting: 3\nI0511 20:19:53.021476 4061 log.go:172] (0xc000a72000) (0xc00015f2c0) Stream removed, broadcasting: 5\n" May 11 20:19:53.025: INFO: stdout: "\naffinity-clusterip-transition-hsvf8\naffinity-clusterip-transition-p555q\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-p555q\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-p555q\naffinity-clusterip-transition-p555q\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-hsvf8\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-p555q\naffinity-clusterip-transition-2hrnq" May 11 20:19:53.025: INFO: Received response from host: May 11 20:19:53.025: INFO: Received response from host: affinity-clusterip-transition-hsvf8 May 11 20:19:53.025: INFO: Received response from host: affinity-clusterip-transition-p555q May 11 20:19:53.025: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-p555q May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-p555q May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-p555q May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-hsvf8 May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-p555q May 11 20:19:53.026: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-622 execpod-affinityvvj4g -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.47.173:80/ ; done' May 11 20:19:53.912: INFO: stderr: "I0511 20:19:53.781510 4077 log.go:172] (0xc000407ad0) (0xc00061f0e0) Create stream\nI0511 20:19:53.781576 4077 log.go:172] (0xc000407ad0) (0xc00061f0e0) Stream added, broadcasting: 1\nI0511 20:19:53.783857 4077 log.go:172] (0xc000407ad0) Reply frame received for 1\nI0511 20:19:53.783879 4077 log.go:172] (0xc000407ad0) (0xc0005e4e60) Create stream\nI0511 20:19:53.783888 4077 log.go:172] (0xc000407ad0) (0xc0005e4e60) Stream added, broadcasting: 3\nI0511 20:19:53.784650 4077 log.go:172] (0xc000407ad0) Reply frame received for 3\nI0511 20:19:53.784693 4077 log.go:172] (0xc000407ad0) (0xc00061f680) Create stream\nI0511 20:19:53.784701 4077 log.go:172] (0xc000407ad0) 
(0xc00061f680) Stream added, broadcasting: 5\nI0511 20:19:53.785766 4077 log.go:172] (0xc000407ad0) Reply frame received for 5\nI0511 20:19:53.842230 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.842259 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.842268 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.842281 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.842287 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.842296 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.842725 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.842740 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.842752 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.843193 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.843211 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.843220 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.843231 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.843239 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.843247 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.846720 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.846759 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.846792 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.847070 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.847092 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.847102 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.847115 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.847122 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.847131 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.847139 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.847152 4077 log.go:172] (0xc00061f680) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.847169 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.851011 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.851026 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.851042 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.851402 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.851422 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.851431 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.851441 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.851447 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.851454 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.855012 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.855032 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.855046 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.855445 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 
20:19:53.855456 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.855473 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.855506 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.855527 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.855551 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.859154 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.859173 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.859190 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.859590 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.859630 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.859650 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.859675 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.859705 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.859728 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.859746 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.859766 4077 log.go:172] (0xc00061f680) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.859801 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.863447 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.863468 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.863490 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.863782 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.863802 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.863811 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.863820 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.863831 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.863842 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.867702 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.867739 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.867761 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.869255 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.869272 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.869284 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.869305 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.869326 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.869343 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.872526 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.872538 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.872547 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.873005 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.873026 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.873039 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\nI0511 20:19:53.873056 4077 log.go:172] (0xc000407ad0) Data frame received 
for 5\nI0511 20:19:53.873071 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.873081 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.873100 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.873237 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.873251 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.877477 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.877488 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.877495 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.878483 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.878494 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.878501 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.878508 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.878514 4077 log.go:172] (0xc00061f680) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.878527 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.878554 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.878574 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.878593 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.882606 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.882619 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.882624 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.883193 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.883213 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.883224 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.883235 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.883239 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.883243 4077 log.go:172] (0xc00061f680) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.883254 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.883259 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.883263 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.888941 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.888967 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.888986 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.889836 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.889868 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.889881 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.889897 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.889910 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.889919 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.893101 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.893299 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.893324 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.893551 4077 log.go:172] (0xc000407ad0) Data frame received for 
5\nI0511 20:19:53.893566 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.893576 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.893585 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.893593 4077 log.go:172] (0xc00061f680) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.893610 4077 log.go:172] (0xc00061f680) (5) Data frame sent\nI0511 20:19:53.893629 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.893638 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.893650 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.896981 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.896996 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.897009 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.898066 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.898079 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.898091 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.898124 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.898134 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.898139 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.901721 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.901741 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.901754 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.901981 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.901992 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.902005 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.902046 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.902069 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.902086 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.904899 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.904916 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.904935 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.905337 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.905349 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.905358 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.905472 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.905483 4077 log.go:172] (0xc00061f680) (5) Data frame handling\nI0511 20:19:53.905492 4077 log.go:172] (0xc00061f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.47.173:80/\nI0511 20:19:53.907925 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.907934 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.907941 4077 log.go:172] (0xc0005e4e60) (3) Data frame sent\nI0511 20:19:53.908234 4077 log.go:172] (0xc000407ad0) Data frame received for 3\nI0511 20:19:53.908285 4077 log.go:172] (0xc0005e4e60) (3) Data frame handling\nI0511 20:19:53.908332 4077 log.go:172] (0xc000407ad0) Data frame received for 5\nI0511 20:19:53.908347 4077 log.go:172] (0xc00061f680) (5) Data frame 
handling\nI0511 20:19:53.909513 4077 log.go:172] (0xc000407ad0) Data frame received for 1\nI0511 20:19:53.909526 4077 log.go:172] (0xc00061f0e0) (1) Data frame handling\nI0511 20:19:53.909532 4077 log.go:172] (0xc00061f0e0) (1) Data frame sent\nI0511 20:19:53.909538 4077 log.go:172] (0xc000407ad0) (0xc00061f0e0) Stream removed, broadcasting: 1\nI0511 20:19:53.909548 4077 log.go:172] (0xc000407ad0) Go away received\nI0511 20:19:53.909738 4077 log.go:172] (0xc000407ad0) (0xc00061f0e0) Stream removed, broadcasting: 1\nI0511 20:19:53.909747 4077 log.go:172] (0xc000407ad0) (0xc0005e4e60) Stream removed, broadcasting: 3\nI0511 20:19:53.909751 4077 log.go:172] (0xc000407ad0) (0xc00061f680) Stream removed, broadcasting: 5\n" May 11 20:19:53.912: INFO: stdout: "\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq\naffinity-clusterip-transition-2hrnq" May 11 20:19:53.912: INFO: Received response from host: May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Received response from host: affinity-clusterip-transition-2hrnq May 11 20:19:53.912: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-622, will wait for the garbage collector to delete the pods May 11 20:19:54.702: INFO: Deleting ReplicationController affinity-clusterip-transition took: 12.951782ms May 11 20:19:55.702: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 1.000200416s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:20:20.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-622" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:65.294 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":246,"skipped":3958,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:20:22.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 11 20:20:27.532: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 11 20:20:30.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825227, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825227, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825227, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724825227, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 20:20:34.107: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:20:34.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:20:38.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7242" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:19.865 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":247,"skipped":3984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:20:42.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 11 20:20:45.163: INFO: created pod pod-service-account-defaultsa May 11 20:20:45.163: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 11 20:20:45.239: INFO: created pod pod-service-account-mountsa May 11 20:20:45.239: INFO: pod pod-service-account-mountsa service account token volume mount: true May 11 20:20:45.340: INFO: created pod pod-service-account-nomountsa May 11 20:20:45.340: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 11 20:20:45.349: INFO: created pod pod-service-account-defaultsa-mountspec May 11 20:20:45.349: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 11 20:20:45.439: INFO: created pod pod-service-account-mountsa-mountspec May 11 20:20:45.439: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 11 20:20:45.556: INFO: created pod pod-service-account-nomountsa-mountspec May 11 20:20:45.556: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 11 20:20:45.626: INFO: created pod pod-service-account-defaultsa-nomountspec May 11 20:20:45.626: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 11 20:20:45.705: INFO: created pod pod-service-account-mountsa-nomountspec May 11 20:20:45.705: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 11 20:20:45.759: INFO: created pod pod-service-account-nomountsa-nomountspec May 11 20:20:45.759: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: 
false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:20:45.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6079" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":248,"skipped":4011,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:20:46.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8442.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8442.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8442.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8442.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8442.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8442.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 20:21:15.081: INFO: DNS probes using dns-8442/dns-test-4e595f69-a1a8-4ce4-9c00-27b57b800bb6 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:21:15.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8442" for this suite. 
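For reference, the probe logic in the STEP commands above reduces to a handful of checks run in a loop inside each prober pod. This is a condensed restatement of two representative logged commands (the dns-8442 namespace and the dns-querier-1 hostname are from this run), with the framework's $$ escaping collapsed to a plain $:

# The pod's own hostname must resolve via the /etc/hosts file the kubelet manages.
test -n "$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1

# The pod's A record, derived from its IP, must resolve over both UDP and TCP.
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8442.pod.cluster.local"}')
check="$(dig +notcp +noall +answer +search ${podARec} A)" && test -n "$check" && echo OK > /results/wheezy_udp@PodARecord
check="$(dig +tcp +noall +answer +search ${podARec} A)" && test -n "$check" && echo OK > /results/wheezy_tcp@PodARecord

The test then collects the OK marker files from /results to decide that every expected name resolved.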
• [SLOW TEST:29.544 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":249,"skipped":4073,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:21:15.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 11 20:21:16.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9867' May 11 20:21:18.163: INFO: stderr: "" May 11 20:21:18.163: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 11 20:21:19.195: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:21:19.195: INFO: Found 0 / 1 May 11 20:21:21.023: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:21:21.023: INFO: Found 0 / 1 May 11 20:21:21.310: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:21:21.310: INFO: Found 0 / 1 May 11 20:21:22.220: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:21:22.220: INFO: Found 0 / 1 May 11 20:21:23.167: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:21:23.167: INFO: Found 0 / 1 May 11 20:21:24.166: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:21:24.166: INFO: Found 1 / 1 May 11 20:21:24.166: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 11 20:21:24.168: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:21:24.168: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 20:21:24.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-7c6tc --namespace=kubectl-9867 -p {"metadata":{"annotations":{"x":"y"}}}' May 11 20:21:24.254: INFO: stderr: "" May 11 20:21:24.254: INFO: stdout: "pod/agnhost-master-7c6tc patched\n" STEP: checking annotations May 11 20:21:24.309: INFO: Selector matched 1 pods for map[app:agnhost] May 11 20:21:24.309: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:21:24.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9867" for this suite. 
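The patch applied above is a plain strategic-merge patch; run standalone it would look like the following, with a jsonpath read-back added here as a quick verification (the test itself re-checks the annotation through the API; pod and namespace names are the ones from this run):

# Add the annotation x=y to the pod created by the RC...
kubectl --namespace=kubectl-9867 patch pod agnhost-master-7c6tc \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
# ...then confirm it landed; expected output: y
kubectl --namespace=kubectl-9867 get pod agnhost-master-7c6tc \
  -o jsonpath='{.metadata.annotations.x}'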
• [SLOW TEST:8.525 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":250,"skipped":4076,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:21:24.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:21:24.723: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 11 20:21:24.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:24.831: INFO: Number of nodes with available pods: 0 May 11 20:21:24.831: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:25.910: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:25.912: INFO: Number of nodes with available pods: 0 May 11 20:21:25.912: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:26.836: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:26.839: INFO: Number of nodes with available pods: 0 May 11 20:21:26.839: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:28.211: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:28.214: INFO: Number of nodes with available pods: 0 May 11 20:21:28.214: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:28.839: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:28.912: INFO: Number of nodes with available pods: 0 May 11 20:21:28.912: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:29.834: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:29.836: INFO: Number of nodes with available pods: 0 May 11 20:21:29.836: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:31.446: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:31.449: INFO: Number of nodes with available pods: 2 May 11 20:21:31.449: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 11 20:21:31.964: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:31.964: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:32.234: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:33.592: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:33.592: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:33.597: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:34.263: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:34.263: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:34.263: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:34.268: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:35.293: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:35.293: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:35.293: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:35.299: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:36.532: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:36.532: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:36.532: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 11 20:21:36.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:37.281: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:37.281: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:37.281: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:37.286: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:38.237: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:38.237: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:38.237: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:38.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:39.237: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:39.237: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:39.237: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:39.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:40.238: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:40.238: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:40.238: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:40.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:41.237: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:41.237: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:41.237: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:41.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:42.335: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 11 20:21:42.335: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:42.335: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:42.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:43.279: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:43.279: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:43.279: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:43.284: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:44.240: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:44.240: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:44.240: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:44.245: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:45.257: INFO: Wrong image for pod: daemon-set-jjwcb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:45.257: INFO: Pod daemon-set-jjwcb is not available May 11 20:21:45.257: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:45.262: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:46.251: INFO: Pod daemon-set-g6g4h is not available May 11 20:21:46.251: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:46.256: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:47.701: INFO: Pod daemon-set-g6g4h is not available May 11 20:21:47.701: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:47.707: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:48.401: INFO: Pod daemon-set-g6g4h is not available May 11 20:21:48.401: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 11 20:21:48.413: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:49.275: INFO: Pod daemon-set-g6g4h is not available May 11 20:21:49.275: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:49.278: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:50.346: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:50.353: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:51.238: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:51.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:52.476: INFO: Wrong image for pod: daemon-set-qt8sr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 11 20:21:52.476: INFO: Pod daemon-set-qt8sr is not available May 11 20:21:52.479: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:53.275: INFO: Pod daemon-set-w95dz is not available May 11 20:21:53.360: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 11 20:21:53.369: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:53.392: INFO: Number of nodes with available pods: 1 May 11 20:21:53.393: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:54.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:54.402: INFO: Number of nodes with available pods: 1 May 11 20:21:54.402: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:55.474: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:55.549: INFO: Number of nodes with available pods: 1 May 11 20:21:55.549: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:56.396: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:56.398: INFO: Number of nodes with available pods: 1 May 11 20:21:56.398: INFO: Node latest-worker is running more than one daemon pod May 11 20:21:57.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:21:57.402: INFO: Number of nodes with available pods: 2 May 11 20:21:57.402: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-86, will wait for the garbage collector to delete the pods May 11 20:21:57.470: INFO: Deleting DaemonSet.extensions daemon-set took: 5.777072ms May 11 20:21:57.870: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.214013ms May 11 20:22:05.272: INFO: Number of nodes with available pods: 0 May 11 20:22:05.272: INFO: Number of running nodes: 0, number of available pods: 0 May 11 20:22:05.274: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-86/daemonsets","resourceVersion":"3562554"},"items":null} May 11 20:22:05.276: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-86/pods","resourceVersion":"3562554"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:22:05.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-86" for this suite. 
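The churn above is the expected trace of a RollingUpdate: each node's old daemon pod (still running docker.io/library/httpd:2.4.38-alpine) is taken down and replaced by one running the agnhost:2.13 image from the updated template, one node at a time, while the control-plane node is skipped because the pods carry no toleration for its node-role.kubernetes.io/master:NoSchedule taint. A minimal sketch of a DaemonSet that would exercise the same path (the image tags and namespace are taken from the log; everything else is illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-86
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # the strategy under test
    rollingUpdate:
      maxUnavailable: 1        # replace at most one pod per node at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        # changing this field from docker.io/library/httpd:2.4.38-alpine to the
        # image below is the spec update that triggers the rollout logged above
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13

Adding a toleration for node-role.kubernetes.io/master would let the controller schedule onto the control-plane node as well; without one, that node is skipped, exactly as logged.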
• [SLOW TEST:40.974 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":251,"skipped":4079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:22:05.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-8c616d80-c264-4325-9111-d6ae8ce85025 in namespace container-probe-4270 May 11 20:22:11.436: INFO: Started pod liveness-8c616d80-c264-4325-9111-d6ae8ce85025 in namespace container-probe-4270 STEP: checking the pod's current state and verifying that restartCount is present May 11 20:22:11.438: INFO: Initial restart count of pod liveness-8c616d80-c264-4325-9111-d6ae8ce85025 is 0 May 11 20:22:30.414: INFO: Restart count of pod container-probe-4270/liveness-8c616d80-c264-4325-9111-d6ae8ce85025 is now 1 (18.975731454s elapsed) May 11 20:22:48.443: INFO: Restart count of pod container-probe-4270/liveness-8c616d80-c264-4325-9111-d6ae8ce85025 is now 2 (37.005629504s elapsed) May 11 20:23:08.491: INFO: Restart count of pod container-probe-4270/liveness-8c616d80-c264-4325-9111-d6ae8ce85025 is now 3 (57.052709563s elapsed) May 11 20:23:33.606: INFO: Restart count of pod container-probe-4270/liveness-8c616d80-c264-4325-9111-d6ae8ce85025 is now 4 (1m22.167844137s elapsed) May 11 20:24:28.627: INFO: Restart count of pod container-probe-4270/liveness-8c616d80-c264-4325-9111-d6ae8ce85025 is now 5 (2m17.188662089s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:24:28.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4270" for this suite. 
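The restart count above climbs from 0 through 5 over roughly 2m17s and never decreases, which is the property this spec asserts. One way to reproduce a monotonically climbing restartCount is a liveness probe that can never succeed; the manifest below is an illustrative sketch, not the suite's actual pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo          # illustrative; the logged pod is liveness-8c616d80-...
spec:
  containers:
  - name: app
    image: busybox             # assumed image
    args: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/does-not-exist"]   # always fails
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1

Every probe failure makes the kubelet kill the container; the default restartPolicy of Always brings it back and increments status.containerStatuses[0].restartCount, which is the counter the test polls.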
• [SLOW TEST:143.375 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":252,"skipped":4110,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:24:28.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 20:24:29.004: INFO: Waiting up to 5m0s for pod "pod-6a66b9e1-b508-4a05-b328-d9fa0b291492" in namespace "emptydir-2695" to be "Succeeded or Failed" May 11 20:24:29.014: INFO: Pod "pod-6a66b9e1-b508-4a05-b328-d9fa0b291492": Phase="Pending", Reason="", readiness=false. Elapsed: 9.772161ms May 11 20:24:31.086: INFO: Pod "pod-6a66b9e1-b508-4a05-b328-d9fa0b291492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08265203s May 11 20:24:33.091: INFO: Pod "pod-6a66b9e1-b508-4a05-b328-d9fa0b291492": Phase="Running", Reason="", readiness=true. Elapsed: 4.087183648s May 11 20:24:35.263: INFO: Pod "pod-6a66b9e1-b508-4a05-b328-d9fa0b291492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.259576871s STEP: Saw pod success May 11 20:24:35.263: INFO: Pod "pod-6a66b9e1-b508-4a05-b328-d9fa0b291492" satisfied condition "Succeeded or Failed" May 11 20:24:35.266: INFO: Trying to get logs from node latest-worker pod pod-6a66b9e1-b508-4a05-b328-d9fa0b291492 container test-container: STEP: delete the pod May 11 20:24:36.134: INFO: Waiting for pod pod-6a66b9e1-b508-4a05-b328-d9fa0b291492 to disappear May 11 20:24:36.180: INFO: Pod pod-6a66b9e1-b508-4a05-b328-d9fa0b291492 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:24:36.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2695" for this suite. 
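This spec runs a short-lived pod as a non-root user with a memory-backed emptyDir volume and has the test container verify 0777 permission bits inside the mount before exiting 0, which is why the pod moves Pending, Running, then Succeeded. The shape of such a pod, with all concrete values as assumptions except the container name, which the log shows as test-container:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox             # assumed image
    command: ["sh", "-c", "umask 0 && touch /mnt/probe && ls -ln /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # the "tmpfs" part of the test name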
• [SLOW TEST:7.521 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4119,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:24:36.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-54743681-d3a2-402a-a390-5b8dfbcf900a in namespace container-probe-2317 May 11 20:24:42.717: INFO: Started pod busybox-54743681-d3a2-402a-a390-5b8dfbcf900a in namespace container-probe-2317 STEP: checking the pod's current state and verifying that restartCount is present May 11 20:24:42.720: INFO: Initial restart count of pod busybox-54743681-d3a2-402a-a390-5b8dfbcf900a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:28:44.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2317" for this suite. 
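Here the liveness probe named in the spec title, exec "cat /tmp/health", succeeds for the entire four-minute observation window, so the restart count stays at its initial 0 and the spec passes. A common shape for such a pod (the probe command comes from the test name; the container command and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo  # the logged pod is busybox-54743681-...
spec:
  containers:
  - name: busybox
    image: busybox             # assumed image
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 600   # the file exists for the pod's lifetime
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # keeps succeeding, so no restarts
      initialDelaySeconds: 5
      periodSeconds: 5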
• [SLOW TEST:248.230 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":254,"skipped":4129,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:28:44.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-f61536b3-a720-4bfc-8367-e49bd8f0cd18 STEP: Creating a pod to test consume secrets May 11 20:28:44.506: INFO: Waiting up to 5m0s for pod "pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f" in namespace "secrets-4599" to be "Succeeded or Failed" May 11 20:28:44.553: INFO: Pod "pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.647547ms May 11 20:28:46.556: INFO: Pod "pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050262424s May 11 20:28:48.901: INFO: Pod "pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394820735s May 11 20:28:50.954: INFO: Pod "pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.447944066s STEP: Saw pod success May 11 20:28:50.954: INFO: Pod "pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f" satisfied condition "Succeeded or Failed" May 11 20:28:50.958: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f container secret-volume-test: STEP: delete the pod May 11 20:28:51.325: INFO: Waiting for pod pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f to disappear May 11 20:28:51.523: INFO: Pod pod-secrets-2a01ffef-d6f0-4408-9fd0-d34a3f2d4a6f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:28:51.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4599" for this suite. 
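The "mappings and Item Mode" variant mounts a Secret through an items list, which both renames the key on disk and sets an explicit per-file mode; the secret-volume-test container then checks the projected file's path, content, and permission bits before exiting. A sketch, with the Secret name taken from the log and the key, path, payload, and mode as illustrative values:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-f61536b3-a720-4bfc-8367-e49bd8f0cd18
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test   # container name from the log
    image: busybox             # assumed image
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-f61536b3-a720-4bfc-8367-e49bd8f0cd18
      items:
      - key: data-1
        path: new-path-data-1  # the "mapping": the key is renamed on disk
        mode: 0400             # the "Item Mode": per-file permission bits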
• [SLOW TEST:7.145 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4130,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:28:51.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 11 20:28:52.270: INFO: >>> kubeConfig: /root/.kube/config May 11 20:28:55.445: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:29:06.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5410" for this suite. 
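For this spec the suite registers two CRDs whose API groups differ and then verifies, per the STEP line above, that custom resources from both groups show up in the API server's aggregated OpenAPI document (the same document that backs kubectl explain). A minimal CRD of the kind involved; the group, kind, and schema fields here are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.group-a.example.com
spec:
  group: group-a.example.com   # a second CRD in group-b.example.com gives "different groups"
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:         # this schema is what the apiserver publishes
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: integer

Once both CRDs are established, fetching /openapi/v2 (or running kubectl explain on each kind) surfaces definitions from both groups, which is the behavior being checked.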
• [SLOW TEST:14.578 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":256,"skipped":4130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:29:06.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-mrg6r in namespace proxy-704 I0511 20:29:06.763082 7 runners.go:190] Created replication controller with name: proxy-service-mrg6r, namespace: proxy-704, replica count: 1 I0511 20:29:07.813445 7 runners.go:190] proxy-service-mrg6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:29:08.813647 7 runners.go:190] proxy-service-mrg6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:29:09.813866 7 runners.go:190] proxy-service-mrg6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:29:10.814104 7 runners.go:190] proxy-service-mrg6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:29:11.814310 7 runners.go:190] proxy-service-mrg6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 20:29:12.814530 7 runners.go:190] proxy-service-mrg6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 20:29:13.814700 7 runners.go:190] proxy-service-mrg6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 20:29:14.814923 7 runners.go:190] proxy-service-mrg6r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 20:29:14.822: INFO: setup took 8.428577306s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 11 20:29:14.828: INFO: (0) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... 
(200; 5.273112ms) May 11 20:29:14.829: INFO: (0) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 6.334607ms) May 11 20:29:14.829: INFO: (0) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname1/proxy/: foo (200; 6.549829ms) May 11 20:29:14.829: INFO: (0) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testtest (200; 10.853211ms) May 11 20:29:14.833: INFO: (0) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname1/proxy/: foo (200; 10.821166ms) May 11 20:29:14.833: INFO: (0) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 10.801747ms) May 11 20:29:14.834: INFO: (0) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 11.348429ms) May 11 20:29:14.834: INFO: (0) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 11.495941ms) May 11 20:29:14.834: INFO: (0) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 11.858294ms) May 11 20:29:14.835: INFO: (0) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 13.018955ms) May 11 20:29:14.835: INFO: (0) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 13.203223ms) May 11 20:29:14.836: INFO: (0) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: testt... (200; 3.370295ms) May 11 20:29:14.842: INFO: (1) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 3.422202ms) May 11 20:29:14.842: INFO: (1) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 3.493878ms) May 11 20:29:14.843: INFO: (1) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 4.079604ms) May 11 20:29:14.843: INFO: (1) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 4.578672ms) May 11 20:29:14.844: INFO: (1) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 5.182494ms) May 11 20:29:14.844: INFO: (1) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname1/proxy/: foo (200; 5.452614ms) May 11 20:29:14.844: INFO: (1) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname1/proxy/: foo (200; 5.44281ms) May 11 20:29:14.844: INFO: (1) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 5.619678ms) May 11 20:29:14.849: INFO: (2) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 4.427679ms) May 11 20:29:14.849: INFO: (2) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 4.72142ms) May 11 20:29:14.849: INFO: (2) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname1/proxy/: foo (200; 4.992921ms) May 11 20:29:14.849: INFO: (2) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 4.923591ms) May 11 20:29:14.849: INFO: (2) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... 
(200; 4.910109ms) May 11 20:29:14.850: INFO: (2) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 5.114043ms) May 11 20:29:14.850: INFO: (2) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 5.126191ms) May 11 20:29:14.850: INFO: (2) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 5.218536ms) May 11 20:29:14.850: INFO: (2) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testtesttest (200; 4.745423ms) May 11 20:29:14.856: INFO: (3) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 4.766204ms) May 11 20:29:14.856: INFO: (3) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 4.807024ms) May 11 20:29:14.856: INFO: (3) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 4.790441ms) May 11 20:29:14.856: INFO: (3) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 4.837933ms) May 11 20:29:14.856: INFO: (3) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... (200; 4.816289ms) May 11 20:29:14.856: INFO: (3) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: testt... (200; 4.175804ms) May 11 20:29:14.862: INFO: (4) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 4.510197ms) May 11 20:29:14.862: INFO: (4) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 4.586988ms) May 11 20:29:14.862: INFO: (4) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: t... (200; 5.424952ms) May 11 20:29:14.869: INFO: (5) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 5.595742ms) May 11 20:29:14.871: INFO: (5) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 6.623852ms) May 11 20:29:14.872: INFO: (5) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 7.53421ms) May 11 20:29:14.872: INFO: (5) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testtest (200; 8.470393ms) May 11 20:29:14.872: INFO: (5) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 7.906854ms) May 11 20:29:14.873: INFO: (5) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 8.012959ms) May 11 20:29:14.873: INFO: (5) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 8.363346ms) May 11 20:29:14.873: INFO: (5) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 8.437786ms) May 11 20:29:14.873: INFO: (5) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname1/proxy/: foo (200; 8.298923ms) May 11 20:29:14.873: INFO: (5) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 8.954525ms) May 11 20:29:14.876: INFO: (6) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 3.386327ms) May 11 20:29:14.878: INFO: (6) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 4.921476ms) May 11 20:29:14.878: INFO: (6) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 5.019096ms) May 11 20:29:14.879: INFO: (6) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: 
bar (200; 5.796987ms) May 11 20:29:14.879: INFO: (6) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 5.891097ms) May 11 20:29:14.879: INFO: (6) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 5.862605ms) May 11 20:29:14.879: INFO: (6) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... (200; 5.94423ms) May 11 20:29:14.879: INFO: (6) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: testtesttest (200; 4.539238ms) May 11 20:29:14.885: INFO: (7) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 4.581733ms) May 11 20:29:14.885: INFO: (7) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 4.586634ms) May 11 20:29:14.885: INFO: (7) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 4.570045ms) May 11 20:29:14.885: INFO: (7) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 4.651603ms) May 11 20:29:14.885: INFO: (7) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 4.835634ms) May 11 20:29:14.885: INFO: (7) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... (200; 5.033615ms) May 11 20:29:14.888: INFO: (8) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 2.763739ms) May 11 20:29:14.888: INFO: (8) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 2.79833ms) May 11 20:29:14.889: INFO: (8) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 3.843396ms) May 11 20:29:14.889: INFO: (8) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 3.798648ms) May 11 20:29:14.889: INFO: (8) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname1/proxy/: foo (200; 3.87167ms) May 11 20:29:14.889: INFO: (8) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... (200; 3.79695ms) May 11 20:29:14.889: INFO: (8) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 3.9332ms) May 11 20:29:14.890: INFO: (8) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 4.496883ms) May 11 20:29:14.890: INFO: (8) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 4.599291ms) May 11 20:29:14.890: INFO: (8) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 4.597572ms) May 11 20:29:14.890: INFO: (8) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testtest (200; 2.5846ms) May 11 20:29:14.894: INFO: (9) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: t... 
(200; 5.104134ms) May 11 20:29:14.897: INFO: (9) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 5.489562ms) May 11 20:29:14.897: INFO: (9) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 5.635061ms) May 11 20:29:14.897: INFO: (9) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testtest (200; 2.785126ms) May 11 20:29:14.901: INFO: (10) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 3.222359ms) May 11 20:29:14.901: INFO: (10) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 3.561389ms) May 11 20:29:14.901: INFO: (10) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 3.676141ms) May 11 20:29:14.901: INFO: (10) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testt... (200; 3.626525ms) May 11 20:29:14.902: INFO: (10) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 4.288459ms) May 11 20:29:14.902: INFO: (10) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 4.521635ms) May 11 20:29:14.902: INFO: (10) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 4.579767ms) May 11 20:29:14.902: INFO: (10) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname1/proxy/: foo (200; 4.514958ms) May 11 20:29:14.902: INFO: (10) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: t... (200; 3.267837ms) May 11 20:29:14.906: INFO: (11) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 3.14216ms) May 11 20:29:14.907: INFO: (11) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 3.646611ms) May 11 20:29:14.907: INFO: (11) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: testtest (200; 3.661359ms) May 11 20:29:14.907: INFO: (11) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname1/proxy/: foo (200; 3.812152ms) May 11 20:29:14.907: INFO: (11) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 4.148825ms) May 11 20:29:14.907: INFO: (11) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 3.937868ms) May 11 20:29:14.908: INFO: (11) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 4.881883ms) May 11 20:29:14.908: INFO: (11) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname1/proxy/: foo (200; 4.155626ms) May 11 20:29:14.908: INFO: (11) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 4.689324ms) May 11 20:29:14.908: INFO: (11) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 5.05944ms) May 11 20:29:14.910: INFO: (12) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: testtest (200; 4.750391ms) May 11 20:29:14.913: INFO: (12) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 4.807699ms) May 11 20:29:14.913: INFO: (12) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... 
(200; 4.855954ms) May 11 20:29:14.913: INFO: (12) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 4.895962ms) May 11 20:29:14.913: INFO: (12) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 5.171358ms) May 11 20:29:14.913: INFO: (12) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 5.147885ms) May 11 20:29:14.913: INFO: (12) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 5.142646ms) May 11 20:29:14.913: INFO: (12) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 5.21957ms) May 11 20:29:14.913: INFO: (12) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 5.274279ms) May 11 20:29:14.916: INFO: (13) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 2.799521ms) May 11 20:29:14.916: INFO: (13) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: t... (200; 3.062432ms) May 11 20:29:14.916: INFO: (13) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 3.006159ms) May 11 20:29:14.917: INFO: (13) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 3.536883ms) May 11 20:29:14.917: INFO: (13) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testtestt... (200; 4.573689ms) May 11 20:29:14.923: INFO: (14) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 4.591122ms) May 11 20:29:14.923: INFO: (14) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 4.566457ms) May 11 20:29:14.923: INFO: (14) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 4.934291ms) May 11 20:29:14.923: INFO: (14) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 4.90024ms) May 11 20:29:14.923: INFO: (14) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: test (200; 4.955854ms) May 11 20:29:14.923: INFO: (14) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 5.000556ms) May 11 20:29:14.926: INFO: (15) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testt... 
(200; 8.197008ms) May 11 20:29:14.932: INFO: (15) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 8.773548ms) May 11 20:29:14.932: INFO: (15) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 8.76466ms) May 11 20:29:14.932: INFO: (15) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 8.727464ms) May 11 20:29:14.932: INFO: (15) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 9.206415ms) May 11 20:29:14.932: INFO: (15) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 9.187095ms) May 11 20:29:14.933: INFO: (15) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 9.991826ms) May 11 20:29:14.933: INFO: (15) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 10.037557ms) May 11 20:29:14.933: INFO: (15) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname1/proxy/: foo (200; 10.009816ms) May 11 20:29:14.933: INFO: (15) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 10.021786ms) May 11 20:29:14.933: INFO: (15) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: testt... (200; 4.178911ms) May 11 20:29:14.938: INFO: (16) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 4.216159ms) May 11 20:29:14.938: INFO: (16) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 4.284599ms) May 11 20:29:14.938: INFO: (16) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 4.178805ms) May 11 20:29:14.938: INFO: (16) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 4.1997ms) May 11 20:29:14.938: INFO: (16) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 4.263301ms) May 11 20:29:14.941: INFO: (17) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:1080/proxy/: testtest (200; 3.507944ms) May 11 20:29:14.941: INFO: (17) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 3.549038ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... 
(200; 4.494702ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 4.461015ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 4.461ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname1/proxy/: foo (200; 4.466471ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname2/proxy/: bar (200; 4.472285ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 4.61626ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 4.649663ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:460/proxy/: tls baz (200; 4.73759ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 4.696475ms) May 11 20:29:14.942: INFO: (17) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname1/proxy/: foo (200; 4.731164ms) May 11 20:29:14.943: INFO: (17) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: test (200; 2.611524ms) May 11 20:29:14.945: INFO: (18) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:1080/proxy/: t... (200; 2.738948ms) May 11 20:29:14.945: INFO: (18) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:162/proxy/: bar (200; 2.805553ms) May 11 20:29:14.945: INFO: (18) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 2.85469ms) May 11 20:29:14.946: INFO: (18) /api/v1/namespaces/proxy-704/pods/http:proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 2.725584ms) May 11 20:29:14.947: INFO: (18) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:462/proxy/: tls qux (200; 3.851137ms) May 11 20:29:14.947: INFO: (18) /api/v1/namespaces/proxy-704/pods/https:proxy-service-mrg6r-6jz9l:443/proxy/: testtestt... (200; 6.339722ms) May 11 20:29:14.957: INFO: (19) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l:160/proxy/: foo (200; 6.444674ms) May 11 20:29:14.957: INFO: (19) /api/v1/namespaces/proxy-704/pods/proxy-service-mrg6r-6jz9l/proxy/: test (200; 6.538616ms) May 11 20:29:14.957: INFO: (19) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname2/proxy/: tls qux (200; 6.960317ms) May 11 20:29:14.958: INFO: (19) /api/v1/namespaces/proxy-704/services/proxy-service-mrg6r:portname2/proxy/: bar (200; 7.249667ms) May 11 20:29:14.958: INFO: (19) /api/v1/namespaces/proxy-704/services/http:proxy-service-mrg6r:portname1/proxy/: foo (200; 7.9408ms) May 11 20:29:14.959: INFO: (19) /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/: tls baz (200; 8.179838ms) STEP: deleting ReplicationController proxy-service-mrg6r in namespace proxy-704, will wait for the garbage collector to delete the pods May 11 20:29:15.016: INFO: Deleting ReplicationController proxy-service-mrg6r took: 5.352313ms May 11 20:29:15.316: INFO: Terminating ReplicationController proxy-service-mrg6r pods took: 300.198644ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:29:24.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-704" for this suite. 
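Each of the 320 attempts above is an apiserver-proxy request of one of two families: pod proxying, /api/v1/namespaces/proxy-704/pods/[scheme:]proxy-service-mrg6r-6jz9l:port/proxy/, and service proxying, /api/v1/namespaces/proxy-704/services/[scheme:]proxy-service-mrg6r:portname/proxy/, and every one must return 200 with the backend's payload (foo, bar, tls baz, tls qux, or the truncated test page). The named ports in those URLs come from the Service object; a sketch consistent with the log's port names and observed pod target ports (the front-end port numbers and selector are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: proxy-service-mrg6r    # name from the log
  namespace: proxy-704
spec:
  selector:
    app: proxy-service         # assumed label
  ports:                       # these names are what follows the ':' in the proxy URLs
  - name: portname1
    port: 80                   # assumed; backend pod port 160 answers "foo" in the log
    targetPort: 160
  - name: portname2
    port: 81
    targetPort: 162            # answers "bar"
  - name: tlsportname1
    port: 443
    targetPort: 460            # answers "tls baz"
  - name: tlsportname2
    port: 444
    targetPort: 462            # answers "tls qux"

A GET on /api/v1/namespaces/proxy-704/services/https:proxy-service-mrg6r:tlsportname1/proxy/ therefore reaches pod port 460 over TLS by way of the API server, which is the round trip being timed in the millisecond figures above.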
• [SLOW TEST:18.780 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":257,"skipped":4162,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:29:24.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9577 STEP: creating service affinity-nodeport in namespace services-9577 STEP: creating replication controller affinity-nodeport in namespace services-9577 I0511 20:29:25.159055 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9577, replica count: 3 I0511 20:29:28.209606 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:29:31.209807 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:29:34.210111 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:29:37.212238 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 20:29:37.227: INFO: Creating new exec pod May 11 20:29:44.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9577 execpod-affinity67dxl -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 11 20:29:48.563: INFO: stderr: "I0511 20:29:48.484415 4132 log.go:172] (0xc00003a580) (0xc000636280) Create stream\nI0511 20:29:48.484445 4132 log.go:172] (0xc00003a580) (0xc000636280) Stream added, broadcasting: 1\nI0511 20:29:48.488636 4132 log.go:172] (0xc00003a580) Reply frame received for 1\nI0511 20:29:48.488701 4132 log.go:172] (0xc00003a580) (0xc000636780) Create stream\nI0511 20:29:48.488715 4132 log.go:172] (0xc00003a580) (0xc000636780) Stream added, broadcasting: 3\nI0511 20:29:48.489730 4132 log.go:172] (0xc00003a580) Reply frame received for 3\nI0511 20:29:48.489755 4132 log.go:172] (0xc00003a580) (0xc000689220) Create stream\nI0511 20:29:48.489773 4132 log.go:172] (0xc00003a580) 
(0xc000689220) Stream added, broadcasting: 5\nI0511 20:29:48.493637 4132 log.go:172] (0xc00003a580) Reply frame received for 5\nI0511 20:29:48.556484 4132 log.go:172] (0xc00003a580) Data frame received for 5\nI0511 20:29:48.556518 4132 log.go:172] (0xc000689220) (5) Data frame handling\nI0511 20:29:48.556549 4132 log.go:172] (0xc000689220) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0511 20:29:48.556757 4132 log.go:172] (0xc00003a580) Data frame received for 5\nI0511 20:29:48.556780 4132 log.go:172] (0xc000689220) (5) Data frame handling\nI0511 20:29:48.556796 4132 log.go:172] (0xc000689220) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0511 20:29:48.556918 4132 log.go:172] (0xc00003a580) Data frame received for 5\nI0511 20:29:48.556932 4132 log.go:172] (0xc000689220) (5) Data frame handling\nI0511 20:29:48.557475 4132 log.go:172] (0xc00003a580) Data frame received for 3\nI0511 20:29:48.557495 4132 log.go:172] (0xc000636780) (3) Data frame handling\nI0511 20:29:48.558902 4132 log.go:172] (0xc00003a580) Data frame received for 1\nI0511 20:29:48.558927 4132 log.go:172] (0xc000636280) (1) Data frame handling\nI0511 20:29:48.558948 4132 log.go:172] (0xc000636280) (1) Data frame sent\nI0511 20:29:48.558975 4132 log.go:172] (0xc00003a580) (0xc000636280) Stream removed, broadcasting: 1\nI0511 20:29:48.558996 4132 log.go:172] (0xc00003a580) Go away received\nI0511 20:29:48.559344 4132 log.go:172] (0xc00003a580) (0xc000636280) Stream removed, broadcasting: 1\nI0511 20:29:48.559377 4132 log.go:172] (0xc00003a580) (0xc000636780) Stream removed, broadcasting: 3\nI0511 20:29:48.559387 4132 log.go:172] (0xc00003a580) (0xc000689220) Stream removed, broadcasting: 5\n" May 11 20:29:48.563: INFO: stdout: "" May 11 20:29:48.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9577 execpod-affinity67dxl -- /bin/sh -x -c nc -zv -t -w 2 10.101.85.236 80' May 11 20:29:48.750: INFO: stderr: "I0511 20:29:48.681908 4165 log.go:172] (0xc0009bb6b0) (0xc000550fa0) Create stream\nI0511 20:29:48.681943 4165 log.go:172] (0xc0009bb6b0) (0xc000550fa0) Stream added, broadcasting: 1\nI0511 20:29:48.684708 4165 log.go:172] (0xc0009bb6b0) Reply frame received for 1\nI0511 20:29:48.684727 4165 log.go:172] (0xc0009bb6b0) (0xc0001b75e0) Create stream\nI0511 20:29:48.684743 4165 log.go:172] (0xc0009bb6b0) (0xc0001b75e0) Stream added, broadcasting: 3\nI0511 20:29:48.685670 4165 log.go:172] (0xc0009bb6b0) Reply frame received for 3\nI0511 20:29:48.685686 4165 log.go:172] (0xc0009bb6b0) (0xc000402280) Create stream\nI0511 20:29:48.685692 4165 log.go:172] (0xc0009bb6b0) (0xc000402280) Stream added, broadcasting: 5\nI0511 20:29:48.686462 4165 log.go:172] (0xc0009bb6b0) Reply frame received for 5\nI0511 20:29:48.744187 4165 log.go:172] (0xc0009bb6b0) Data frame received for 5\nI0511 20:29:48.744228 4165 log.go:172] (0xc000402280) (5) Data frame handling\nI0511 20:29:48.744257 4165 log.go:172] (0xc000402280) (5) Data frame sent\n+ nc -zv -t -w 2 10.101.85.236 80\nConnection to 10.101.85.236 80 port [tcp/http] succeeded!\nI0511 20:29:48.744295 4165 log.go:172] (0xc0009bb6b0) Data frame received for 3\nI0511 20:29:48.744312 4165 log.go:172] (0xc0001b75e0) (3) Data frame handling\nI0511 20:29:48.744856 4165 log.go:172] (0xc0009bb6b0) Data frame received for 5\nI0511 20:29:48.744882 4165 log.go:172] (0xc000402280) (5) Data frame handling\nI0511 20:29:48.746984 4165 log.go:172] (0xc0009bb6b0) Data frame 
received for 1\nI0511 20:29:48.747004 4165 log.go:172] (0xc000550fa0) (1) Data frame handling\nI0511 20:29:48.747018 4165 log.go:172] (0xc000550fa0) (1) Data frame sent\nI0511 20:29:48.747034 4165 log.go:172] (0xc0009bb6b0) (0xc000550fa0) Stream removed, broadcasting: 1\nI0511 20:29:48.747329 4165 log.go:172] (0xc0009bb6b0) (0xc000550fa0) Stream removed, broadcasting: 1\nI0511 20:29:48.747352 4165 log.go:172] (0xc0009bb6b0) (0xc0001b75e0) Stream removed, broadcasting: 3\nI0511 20:29:48.747480 4165 log.go:172] (0xc0009bb6b0) (0xc000402280) Stream removed, broadcasting: 5\n" May 11 20:29:48.750: INFO: stdout: "" May 11 20:29:48.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9577 execpod-affinity67dxl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32186' May 11 20:29:48.937: INFO: stderr: "I0511 20:29:48.863616 4185 log.go:172] (0xc00078e0b0) (0xc000370e60) Create stream\nI0511 20:29:48.863654 4185 log.go:172] (0xc00078e0b0) (0xc000370e60) Stream added, broadcasting: 1\nI0511 20:29:48.865853 4185 log.go:172] (0xc00078e0b0) Reply frame received for 1\nI0511 20:29:48.865883 4185 log.go:172] (0xc00078e0b0) (0xc00024c6e0) Create stream\nI0511 20:29:48.865893 4185 log.go:172] (0xc00078e0b0) (0xc00024c6e0) Stream added, broadcasting: 3\nI0511 20:29:48.866571 4185 log.go:172] (0xc00078e0b0) Reply frame received for 3\nI0511 20:29:48.866603 4185 log.go:172] (0xc00078e0b0) (0xc0003714a0) Create stream\nI0511 20:29:48.866614 4185 log.go:172] (0xc00078e0b0) (0xc0003714a0) Stream added, broadcasting: 5\nI0511 20:29:48.867287 4185 log.go:172] (0xc00078e0b0) Reply frame received for 5\nI0511 20:29:48.931219 4185 log.go:172] (0xc00078e0b0) Data frame received for 5\nI0511 20:29:48.931244 4185 log.go:172] (0xc0003714a0) (5) Data frame handling\nI0511 20:29:48.931261 4185 log.go:172] (0xc0003714a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32186\nI0511 20:29:48.931518 4185 log.go:172] (0xc00078e0b0) Data frame received for 5\nI0511 20:29:48.931540 4185 log.go:172] (0xc0003714a0) (5) Data frame handling\nI0511 20:29:48.931558 4185 log.go:172] (0xc0003714a0) (5) Data frame sent\nConnection to 172.17.0.13 32186 port [tcp/32186] succeeded!\nI0511 20:29:48.931963 4185 log.go:172] (0xc00078e0b0) Data frame received for 3\nI0511 20:29:48.931978 4185 log.go:172] (0xc00024c6e0) (3) Data frame handling\nI0511 20:29:48.932042 4185 log.go:172] (0xc00078e0b0) Data frame received for 5\nI0511 20:29:48.932058 4185 log.go:172] (0xc0003714a0) (5) Data frame handling\nI0511 20:29:48.933535 4185 log.go:172] (0xc00078e0b0) Data frame received for 1\nI0511 20:29:48.933548 4185 log.go:172] (0xc000370e60) (1) Data frame handling\nI0511 20:29:48.933559 4185 log.go:172] (0xc000370e60) (1) Data frame sent\nI0511 20:29:48.933570 4185 log.go:172] (0xc00078e0b0) (0xc000370e60) Stream removed, broadcasting: 1\nI0511 20:29:48.933581 4185 log.go:172] (0xc00078e0b0) Go away received\nI0511 20:29:48.933928 4185 log.go:172] (0xc00078e0b0) (0xc000370e60) Stream removed, broadcasting: 1\nI0511 20:29:48.933946 4185 log.go:172] (0xc00078e0b0) (0xc00024c6e0) Stream removed, broadcasting: 3\nI0511 20:29:48.933954 4185 log.go:172] (0xc00078e0b0) (0xc0003714a0) Stream removed, broadcasting: 5\n" May 11 20:29:48.937: INFO: stdout: "" May 11 20:29:48.937: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9577 execpod-affinity67dxl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 
32186' May 11 20:29:49.110: INFO: stderr: "I0511 20:29:49.048018 4205 log.go:172] (0xc0009813f0) (0xc0006e0f00) Create stream\nI0511 20:29:49.048057 4205 log.go:172] (0xc0009813f0) (0xc0006e0f00) Stream added, broadcasting: 1\nI0511 20:29:49.051210 4205 log.go:172] (0xc0009813f0) Reply frame received for 1\nI0511 20:29:49.051239 4205 log.go:172] (0xc0009813f0) (0xc0000ddb80) Create stream\nI0511 20:29:49.051248 4205 log.go:172] (0xc0009813f0) (0xc0000ddb80) Stream added, broadcasting: 3\nI0511 20:29:49.051939 4205 log.go:172] (0xc0009813f0) Reply frame received for 3\nI0511 20:29:49.051963 4205 log.go:172] (0xc0009813f0) (0xc0006c3b80) Create stream\nI0511 20:29:49.051983 4205 log.go:172] (0xc0009813f0) (0xc0006c3b80) Stream added, broadcasting: 5\nI0511 20:29:49.052655 4205 log.go:172] (0xc0009813f0) Reply frame received for 5\nI0511 20:29:49.106257 4205 log.go:172] (0xc0009813f0) Data frame received for 3\nI0511 20:29:49.106303 4205 log.go:172] (0xc0000ddb80) (3) Data frame handling\nI0511 20:29:49.106332 4205 log.go:172] (0xc0009813f0) Data frame received for 5\nI0511 20:29:49.106351 4205 log.go:172] (0xc0006c3b80) (5) Data frame handling\nI0511 20:29:49.106364 4205 log.go:172] (0xc0006c3b80) (5) Data frame sent\nI0511 20:29:49.106377 4205 log.go:172] (0xc0009813f0) Data frame received for 5\nI0511 20:29:49.106388 4205 log.go:172] (0xc0006c3b80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32186\nConnection to 172.17.0.12 32186 port [tcp/32186] succeeded!\nI0511 20:29:49.107341 4205 log.go:172] (0xc0009813f0) Data frame received for 1\nI0511 20:29:49.107379 4205 log.go:172] (0xc0006e0f00) (1) Data frame handling\nI0511 20:29:49.107397 4205 log.go:172] (0xc0006e0f00) (1) Data frame sent\nI0511 20:29:49.107465 4205 log.go:172] (0xc0009813f0) (0xc0006e0f00) Stream removed, broadcasting: 1\nI0511 20:29:49.107504 4205 log.go:172] (0xc0009813f0) Go away received\nI0511 20:29:49.107872 4205 log.go:172] (0xc0009813f0) (0xc0006e0f00) Stream removed, broadcasting: 1\nI0511 20:29:49.107889 4205 log.go:172] (0xc0009813f0) (0xc0000ddb80) Stream removed, broadcasting: 3\nI0511 20:29:49.107898 4205 log.go:172] (0xc0009813f0) (0xc0006c3b80) Stream removed, broadcasting: 5\n" May 11 20:29:49.111: INFO: stdout: "" May 11 20:29:49.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9577 execpod-affinity67dxl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32186/ ; done' May 11 20:29:49.396: INFO: stderr: "I0511 20:29:49.233300 4223 log.go:172] (0xc000418580) (0xc0006a1a40) Create stream\nI0511 20:29:49.233354 4223 log.go:172] (0xc000418580) (0xc0006a1a40) Stream added, broadcasting: 1\nI0511 20:29:49.235712 4223 log.go:172] (0xc000418580) Reply frame received for 1\nI0511 20:29:49.235745 4223 log.go:172] (0xc000418580) (0xc0004dd860) Create stream\nI0511 20:29:49.235758 4223 log.go:172] (0xc000418580) (0xc0004dd860) Stream added, broadcasting: 3\nI0511 20:29:49.236612 4223 log.go:172] (0xc000418580) Reply frame received for 3\nI0511 20:29:49.236649 4223 log.go:172] (0xc000418580) (0xc00023a8c0) Create stream\nI0511 20:29:49.236661 4223 log.go:172] (0xc000418580) (0xc00023a8c0) Stream added, broadcasting: 5\nI0511 20:29:49.237653 4223 log.go:172] (0xc000418580) Reply frame received for 5\nI0511 20:29:49.287312 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.287343 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 
20:29:49.287352 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.287393 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.287425 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.287456 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.294383 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.294414 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.294440 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.294764 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.294781 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.294788 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.294852 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.294866 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.294872 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.300555 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.300573 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.300599 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.301097 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.301215 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.301232 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0511 20:29:49.301315 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.301329 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.301340 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n http://172.17.0.13:32186/\nI0511 20:29:49.301352 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.301367 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.301376 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.306825 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.306866 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.306891 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.307675 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.307703 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.307713 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.307745 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.307781 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.307817 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.312171 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.312191 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.312209 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.312676 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.312705 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.312718 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.312738 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.312759 4223 log.go:172] (0xc00023a8c0) (5) Data 
frame handling\nI0511 20:29:49.312772 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.317742 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.317764 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.317788 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.318204 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.318226 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.318237 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.318251 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.318261 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.318269 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.325642 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.325658 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.325664 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.325675 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.325684 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.325691 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.325698 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.325708 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.325723 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.330778 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.330797 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.330810 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.331597 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.331618 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.331625 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.331635 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.331640 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.331646 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.339455 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.339489 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.339516 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.340399 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.340471 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.340569 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.340719 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.340790 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.340876 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.348136 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.348219 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.348247 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.348881 4223 log.go:172] (0xc000418580) Data frame 
received for 3\nI0511 20:29:49.348911 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.348921 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.348946 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.349029 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.349066 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.353882 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.353909 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.353928 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.354436 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.354460 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.354468 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.354483 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.354506 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.354523 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.359446 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.359465 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.359476 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.360060 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.360080 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.360096 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.360110 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.360136 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.360155 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.364363 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.364374 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.364384 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.364768 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.364782 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.364788 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.364799 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.364807 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.364814 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.368858 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.368878 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.368892 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.369645 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.369679 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.369691 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/I0511 20:29:49.369705 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.369719 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.369737 4223 log.go:172] (0xc0004dd860) (3) Data 
frame sent\nI0511 20:29:49.369932 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.369944 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.369951 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n\nI0511 20:29:49.375315 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.375333 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.375349 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.376384 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.376474 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.376565 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\nI0511 20:29:49.376605 4223 log.go:172] (0xc000418580) Data frame received for 5\n+ I0511 20:29:49.376624 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.376646 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\nI0511 20:29:49.376657 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.376662 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\necho\nI0511 20:29:49.376673 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\nI0511 20:29:49.377063 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.377080 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.377091 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.377101 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.377107 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.377264 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.384164 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.384179 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.384191 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.384618 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.384633 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.384640 4223 log.go:172] (0xc00023a8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32186/\nI0511 20:29:49.384706 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.384724 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.384750 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.388944 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.388960 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.388971 4223 log.go:172] (0xc0004dd860) (3) Data frame sent\nI0511 20:29:49.389787 4223 log.go:172] (0xc000418580) Data frame received for 5\nI0511 20:29:49.389810 4223 log.go:172] (0xc00023a8c0) (5) Data frame handling\nI0511 20:29:49.390078 4223 log.go:172] (0xc000418580) Data frame received for 3\nI0511 20:29:49.390107 4223 log.go:172] (0xc0004dd860) (3) Data frame handling\nI0511 20:29:49.391494 4223 log.go:172] (0xc000418580) Data frame received for 1\nI0511 20:29:49.391547 4223 log.go:172] (0xc0006a1a40) (1) Data frame handling\nI0511 20:29:49.391569 4223 log.go:172] (0xc0006a1a40) (1) Data frame sent\nI0511 20:29:49.391584 4223 log.go:172] (0xc000418580) (0xc0006a1a40) Stream removed, broadcasting: 1\nI0511 20:29:49.391639 4223 log.go:172] (0xc000418580) Go away received\nI0511 20:29:49.391909 4223 log.go:172] (0xc000418580) (0xc0006a1a40) Stream removed, 
broadcasting: 1\nI0511 20:29:49.391928 4223 log.go:172] (0xc000418580) (0xc0004dd860) Stream removed, broadcasting: 3\nI0511 20:29:49.391937 4223 log.go:172] (0xc000418580) (0xc00023a8c0) Stream removed, broadcasting: 5\n" May 11 20:29:49.396: INFO: stdout: "\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b" May 11 20:29:49.396: INFO: Received response from host: May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Received response from host: affinity-nodeport-xzb2b May 11 20:29:49.396: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-9577, will wait for the garbage collector to delete the pods May 11 20:29:50.133: INFO: Deleting ReplicationController affinity-nodeport took: 559.10114ms May 11 20:29:50.733: INFO: Terminating ReplicationController affinity-nodeport pods took: 600.180748ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:30:06.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9577" for this suite. 
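[Annotation] The sixteen identical "affinity-nodeport-xzb2b" responses above are the whole point of this spec: with session affinity set to ClientIP on the NodePort service, every request from the same exec pod must land on the same backend pod. A minimal sketch in Go of what that assertion amounts to (a hypothetical helper, not the framework's actual code):

// affinitycheck.go - sketch: every non-empty curl response must name the
// same backend pod, otherwise session affinity is considered broken.
package main

import (
	"fmt"
	"strings"
)

func checkAffinity(stdout string) error {
	hosts := strings.Fields(stdout) // one pod name per non-empty line
	if len(hosts) == 0 {
		return fmt.Errorf("no responses received")
	}
	for _, h := range hosts[1:] {
		if h != hosts[0] {
			return fmt.Errorf("affinity broken: saw %q and %q", hosts[0], h)
		}
	}
	return nil
}

func main() {
	// Trimmed stdout from the log above; the real run has 16 lines.
	stdout := "\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b\naffinity-nodeport-xzb2b"
	fmt.Println(checkAffinity(stdout)) // <nil> means every hit reached one pod
}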
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:41.554 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":258,"skipped":4176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:30:06.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-841b1f16-bac8-4003-88b3-f4fc3bc84a32 STEP: Creating a pod to test consume configMaps May 11 20:30:06.616: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3" in namespace "projected-5877" to be "Succeeded or Failed" May 11 20:30:06.671: INFO: Pod "pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 54.307272ms May 11 20:30:08.890: INFO: Pod "pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273106416s May 11 20:30:10.944: INFO: Pod "pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327007071s May 11 20:30:13.054: INFO: Pod "pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437391947s STEP: Saw pod success May 11 20:30:13.054: INFO: Pod "pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3" satisfied condition "Succeeded or Failed" May 11 20:30:13.075: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3 container projected-configmap-volume-test: STEP: delete the pod May 11 20:30:13.508: INFO: Waiting for pod pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3 to disappear May 11 20:30:13.513: INFO: Pod pod-projected-configmaps-fee028c2-ef71-46fa-82f1-15072685a6d3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:30:13.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5877" for this suite. 
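[Annotation] The pod this spec builds boils down to a projected configMap volume with a key-to-path mapping, run under a non-root UID. A sketch using the k8s.io/api types; the object/key names, UID, and image are illustrative, not the framework's exact values:

// projectedpod.go - hand-written approximation of the test pod: projected
// configMap volume, key "data-1" remapped to a nested path, non-root user.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}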
• [SLOW TEST:7.049 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4224,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:30:13.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4251 STEP: creating replication controller nodeport-test in namespace services-4251 I0511 20:30:14.117434 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4251, replica count: 2 I0511 20:30:17.167871 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:30:20.168072 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 20:30:20.168: INFO: Creating new exec pod May 11 20:30:25.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4251 execpodck9hx -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 11 20:30:25.560: INFO: stderr: "I0511 20:30:25.392551 4243 log.go:172] (0xc000a7cf20) (0xc000732fa0) Create stream\nI0511 20:30:25.392616 4243 log.go:172] (0xc000a7cf20) (0xc000732fa0) Stream added, broadcasting: 1\nI0511 20:30:25.396120 4243 log.go:172] (0xc000a7cf20) Reply frame received for 1\nI0511 20:30:25.396157 4243 log.go:172] (0xc000a7cf20) (0xc0006edc20) Create stream\nI0511 20:30:25.396173 4243 log.go:172] (0xc000a7cf20) (0xc0006edc20) Stream added, broadcasting: 3\nI0511 20:30:25.396837 4243 log.go:172] (0xc000a7cf20) Reply frame received for 3\nI0511 20:30:25.396866 4243 log.go:172] (0xc000a7cf20) (0xc0006bed20) Create stream\nI0511 20:30:25.396879 4243 log.go:172] (0xc000a7cf20) (0xc0006bed20) Stream added, broadcasting: 5\nI0511 20:30:25.397850 4243 log.go:172] (0xc000a7cf20) Reply frame received for 5\nI0511 20:30:25.452857 4243 log.go:172] (0xc000a7cf20) Data frame received for 5\nI0511 20:30:25.452869 4243 log.go:172] (0xc0006bed20) (5) Data frame handling\nI0511 20:30:25.452875 4243 log.go:172] (0xc0006bed20) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0511 20:30:25.553401 4243 log.go:172] 
(0xc000a7cf20) Data frame received for 5\nI0511 20:30:25.553427 4243 log.go:172] (0xc0006bed20) (5) Data frame handling\nI0511 20:30:25.553445 4243 log.go:172] (0xc0006bed20) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0511 20:30:25.553551 4243 log.go:172] (0xc000a7cf20) Data frame received for 5\nI0511 20:30:25.553575 4243 log.go:172] (0xc0006bed20) (5) Data frame handling\nI0511 20:30:25.554135 4243 log.go:172] (0xc000a7cf20) Data frame received for 3\nI0511 20:30:25.554157 4243 log.go:172] (0xc0006edc20) (3) Data frame handling\nI0511 20:30:25.555445 4243 log.go:172] (0xc000a7cf20) Data frame received for 1\nI0511 20:30:25.555471 4243 log.go:172] (0xc000732fa0) (1) Data frame handling\nI0511 20:30:25.555496 4243 log.go:172] (0xc000732fa0) (1) Data frame sent\nI0511 20:30:25.555524 4243 log.go:172] (0xc000a7cf20) (0xc000732fa0) Stream removed, broadcasting: 1\nI0511 20:30:25.555620 4243 log.go:172] (0xc000a7cf20) Go away received\nI0511 20:30:25.555988 4243 log.go:172] (0xc000a7cf20) (0xc000732fa0) Stream removed, broadcasting: 1\nI0511 20:30:25.556011 4243 log.go:172] (0xc000a7cf20) (0xc0006edc20) Stream removed, broadcasting: 3\nI0511 20:30:25.556021 4243 log.go:172] (0xc000a7cf20) (0xc0006bed20) Stream removed, broadcasting: 5\n" May 11 20:30:25.560: INFO: stdout: "" May 11 20:30:25.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4251 execpodck9hx -- /bin/sh -x -c nc -zv -t -w 2 10.99.112.228 80' May 11 20:30:25.751: INFO: stderr: "I0511 20:30:25.678999 4264 log.go:172] (0xc0009e8160) (0xc000390aa0) Create stream\nI0511 20:30:25.679038 4264 log.go:172] (0xc0009e8160) (0xc000390aa0) Stream added, broadcasting: 1\nI0511 20:30:25.681234 4264 log.go:172] (0xc0009e8160) Reply frame received for 1\nI0511 20:30:25.681272 4264 log.go:172] (0xc0009e8160) (0xc000670500) Create stream\nI0511 20:30:25.681288 4264 log.go:172] (0xc0009e8160) (0xc000670500) Stream added, broadcasting: 3\nI0511 20:30:25.682099 4264 log.go:172] (0xc0009e8160) Reply frame received for 3\nI0511 20:30:25.682136 4264 log.go:172] (0xc0009e8160) (0xc000390e60) Create stream\nI0511 20:30:25.682152 4264 log.go:172] (0xc0009e8160) (0xc000390e60) Stream added, broadcasting: 5\nI0511 20:30:25.683029 4264 log.go:172] (0xc0009e8160) Reply frame received for 5\nI0511 20:30:25.745506 4264 log.go:172] (0xc0009e8160) Data frame received for 3\nI0511 20:30:25.745541 4264 log.go:172] (0xc000670500) (3) Data frame handling\nI0511 20:30:25.745561 4264 log.go:172] (0xc0009e8160) Data frame received for 5\nI0511 20:30:25.745571 4264 log.go:172] (0xc000390e60) (5) Data frame handling\nI0511 20:30:25.745584 4264 log.go:172] (0xc000390e60) (5) Data frame sent\nI0511 20:30:25.745594 4264 log.go:172] (0xc0009e8160) Data frame received for 5\nI0511 20:30:25.745608 4264 log.go:172] (0xc000390e60) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.112.228 80\nConnection to 10.99.112.228 80 port [tcp/http] succeeded!\nI0511 20:30:25.747034 4264 log.go:172] (0xc0009e8160) Data frame received for 1\nI0511 20:30:25.747069 4264 log.go:172] (0xc000390aa0) (1) Data frame handling\nI0511 20:30:25.747090 4264 log.go:172] (0xc000390aa0) (1) Data frame sent\nI0511 20:30:25.747111 4264 log.go:172] (0xc0009e8160) (0xc000390aa0) Stream removed, broadcasting: 1\nI0511 20:30:25.747125 4264 log.go:172] (0xc0009e8160) Go away received\nI0511 20:30:25.747478 4264 log.go:172] (0xc0009e8160) (0xc000390aa0) Stream removed, broadcasting: 1\nI0511 
20:30:25.747503 4264 log.go:172] (0xc0009e8160) (0xc000670500) Stream removed, broadcasting: 3\nI0511 20:30:25.747519 4264 log.go:172] (0xc0009e8160) (0xc000390e60) Stream removed, broadcasting: 5\n" May 11 20:30:25.751: INFO: stdout: "" May 11 20:30:25.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4251 execpodck9hx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30994' May 11 20:30:26.611: INFO: stderr: "I0511 20:30:26.530030 4285 log.go:172] (0xc00096f550) (0xc0006d8f00) Create stream\nI0511 20:30:26.530071 4285 log.go:172] (0xc00096f550) (0xc0006d8f00) Stream added, broadcasting: 1\nI0511 20:30:26.533568 4285 log.go:172] (0xc00096f550) Reply frame received for 1\nI0511 20:30:26.533605 4285 log.go:172] (0xc00096f550) (0xc0006adb80) Create stream\nI0511 20:30:26.533621 4285 log.go:172] (0xc00096f550) (0xc0006adb80) Stream added, broadcasting: 3\nI0511 20:30:26.534337 4285 log.go:172] (0xc00096f550) Reply frame received for 3\nI0511 20:30:26.534369 4285 log.go:172] (0xc00096f550) (0xc00068cc80) Create stream\nI0511 20:30:26.534378 4285 log.go:172] (0xc00096f550) (0xc00068cc80) Stream added, broadcasting: 5\nI0511 20:30:26.535092 4285 log.go:172] (0xc00096f550) Reply frame received for 5\nI0511 20:30:26.605749 4285 log.go:172] (0xc00096f550) Data frame received for 3\nI0511 20:30:26.605791 4285 log.go:172] (0xc0006adb80) (3) Data frame handling\nI0511 20:30:26.605814 4285 log.go:172] (0xc00096f550) Data frame received for 5\nI0511 20:30:26.605826 4285 log.go:172] (0xc00068cc80) (5) Data frame handling\nI0511 20:30:26.605836 4285 log.go:172] (0xc00068cc80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30994\nConnection to 172.17.0.13 30994 port [tcp/30994] succeeded!\nI0511 20:30:26.605987 4285 log.go:172] (0xc00096f550) Data frame received for 5\nI0511 20:30:26.606000 4285 log.go:172] (0xc00068cc80) (5) Data frame handling\nI0511 20:30:26.607289 4285 log.go:172] (0xc00096f550) Data frame received for 1\nI0511 20:30:26.607353 4285 log.go:172] (0xc0006d8f00) (1) Data frame handling\nI0511 20:30:26.607396 4285 log.go:172] (0xc0006d8f00) (1) Data frame sent\nI0511 20:30:26.607415 4285 log.go:172] (0xc00096f550) (0xc0006d8f00) Stream removed, broadcasting: 1\nI0511 20:30:26.607431 4285 log.go:172] (0xc00096f550) Go away received\nI0511 20:30:26.607845 4285 log.go:172] (0xc00096f550) (0xc0006d8f00) Stream removed, broadcasting: 1\nI0511 20:30:26.607872 4285 log.go:172] (0xc00096f550) (0xc0006adb80) Stream removed, broadcasting: 3\nI0511 20:30:26.607890 4285 log.go:172] (0xc00096f550) (0xc00068cc80) Stream removed, broadcasting: 5\n" May 11 20:30:26.612: INFO: stdout: "" May 11 20:30:26.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4251 execpodck9hx -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30994' May 11 20:30:27.092: INFO: stderr: "I0511 20:30:27.012412 4303 log.go:172] (0xc0000e8370) (0xc00015f7c0) Create stream\nI0511 20:30:27.012472 4303 log.go:172] (0xc0000e8370) (0xc00015f7c0) Stream added, broadcasting: 1\nI0511 20:30:27.014412 4303 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0511 20:30:27.014506 4303 log.go:172] (0xc0000e8370) (0xc00015ff40) Create stream\nI0511 20:30:27.014547 4303 log.go:172] (0xc0000e8370) (0xc00015ff40) Stream added, broadcasting: 3\nI0511 20:30:27.015318 4303 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0511 20:30:27.015349 4303 log.go:172] (0xc0000e8370) 
(0xc00036df40) Create stream\nI0511 20:30:27.015364 4303 log.go:172] (0xc0000e8370) (0xc00036df40) Stream added, broadcasting: 5\nI0511 20:30:27.016175 4303 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0511 20:30:27.083917 4303 log.go:172] (0xc0000e8370) Data frame received for 5\nI0511 20:30:27.083962 4303 log.go:172] (0xc00036df40) (5) Data frame handling\nI0511 20:30:27.083990 4303 log.go:172] (0xc00036df40) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30994\nI0511 20:30:27.085684 4303 log.go:172] (0xc0000e8370) Data frame received for 5\nI0511 20:30:27.085720 4303 log.go:172] (0xc00036df40) (5) Data frame handling\nI0511 20:30:27.085746 4303 log.go:172] (0xc00036df40) (5) Data frame sent\nConnection to 172.17.0.12 30994 port [tcp/30994] succeeded!\nI0511 20:30:27.085885 4303 log.go:172] (0xc0000e8370) Data frame received for 3\nI0511 20:30:27.085908 4303 log.go:172] (0xc00015ff40) (3) Data frame handling\nI0511 20:30:27.086052 4303 log.go:172] (0xc0000e8370) Data frame received for 5\nI0511 20:30:27.086140 4303 log.go:172] (0xc00036df40) (5) Data frame handling\nI0511 20:30:27.088007 4303 log.go:172] (0xc0000e8370) Data frame received for 1\nI0511 20:30:27.088073 4303 log.go:172] (0xc00015f7c0) (1) Data frame handling\nI0511 20:30:27.088093 4303 log.go:172] (0xc00015f7c0) (1) Data frame sent\nI0511 20:30:27.088114 4303 log.go:172] (0xc0000e8370) (0xc00015f7c0) Stream removed, broadcasting: 1\nI0511 20:30:27.088133 4303 log.go:172] (0xc0000e8370) Go away received\nI0511 20:30:27.088497 4303 log.go:172] (0xc0000e8370) (0xc00015f7c0) Stream removed, broadcasting: 1\nI0511 20:30:27.088516 4303 log.go:172] (0xc0000e8370) (0xc00015ff40) Stream removed, broadcasting: 3\nI0511 20:30:27.088526 4303 log.go:172] (0xc0000e8370) (0xc00036df40) Stream removed, broadcasting: 5\n" May 11 20:30:27.092: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:30:27.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4251" for this suite. 
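[Annotation] Each nc probe above (nc -zv -t -w 2 <host> <port>) is just a TCP connect with a two-second timeout, aimed in turn at the service DNS name, the cluster IP, and each node IP on the allocated NodePort. The same check in plain Go, with endpoints copied from the log (the service name only resolves from inside the cluster, so this sketch assumes it runs in a pod):

// nodeportprobe.go - sketch: TCP reachability check equivalent to the
// `nc -zv -t -w 2` probes run by the exec pod above.
package main

import (
	"fmt"
	"net"
	"time"
)

func reachable(host, port string) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Service DNS name, cluster IP, then both node IPs on the NodePort.
	checks := [][2]string{
		{"nodeport-test", "80"},
		{"10.99.112.228", "80"},
		{"172.17.0.13", "30994"},
		{"172.17.0.12", "30994"},
	}
	for _, c := range checks {
		fmt.Printf("%s:%s reachable=%v\n", c[0], c[1], reachable(c[0], c[1]))
	}
}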
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:13.628 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":260,"skipped":4230,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 20:30:27.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 20:30:27.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version'
May 11 20:30:27.870: INFO: stderr: ""
May 11 20:30:27.870: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 20:30:27.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5059" for this suite.
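[Annotation] "Is all data printed" reduces to: no field of the client or server version.Info block may be empty. A stdlib sketch of one way to assert that (the struct mirrors version.Info, values are copied from the stdout above; this is not necessarily the framework's exact check):

// versionfields.go - sketch: assert every version.Info field is non-empty.
package main

import (
	"fmt"
	"reflect"
)

// versionInfo mirrors the fields printed by `kubectl version` above.
type versionInfo struct {
	Major, Minor, GitVersion, GitCommit, GitTreeState string
	BuildDate, GoVersion, Compiler, Platform          string
}

func allFieldsSet(v versionInfo) bool {
	rv := reflect.ValueOf(v)
	for i := 0; i < rv.NumField(); i++ {
		if rv.Field(i).String() == "" {
			return false
		}
	}
	return true
}

func main() {
	server := versionInfo{
		Major: "1", Minor: "18", GitVersion: "v1.18.2",
		GitCommit: "52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState: "clean",
		BuildDate: "2020-04-28T05:35:31Z", GoVersion: "go1.13.9",
		Compiler: "gc", Platform: "linux/amd64",
	}
	fmt.Println("server info complete:", allFieldsSet(server)) // true
}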
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":261,"skipped":4241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:30:28.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-2a40d9c4-4d99-4fec-bd51-7e9e41a0779b STEP: Creating a pod to test consume configMaps May 11 20:30:28.663: INFO: Waiting up to 5m0s for pod "pod-configmaps-6794c81d-570c-4790-8817-7a490c638703" in namespace "configmap-1139" to be "Succeeded or Failed" May 11 20:30:28.811: INFO: Pod "pod-configmaps-6794c81d-570c-4790-8817-7a490c638703": Phase="Pending", Reason="", readiness=false. Elapsed: 147.617866ms May 11 20:30:30.815: INFO: Pod "pod-configmaps-6794c81d-570c-4790-8817-7a490c638703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151561708s May 11 20:30:32.826: INFO: Pod "pod-configmaps-6794c81d-570c-4790-8817-7a490c638703": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162653347s May 11 20:30:35.009: INFO: Pod "pod-configmaps-6794c81d-570c-4790-8817-7a490c638703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.345661185s STEP: Saw pod success May 11 20:30:35.009: INFO: Pod "pod-configmaps-6794c81d-570c-4790-8817-7a490c638703" satisfied condition "Succeeded or Failed" May 11 20:30:35.011: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6794c81d-570c-4790-8817-7a490c638703 container configmap-volume-test: STEP: delete the pod May 11 20:30:35.991: INFO: Waiting for pod pod-configmaps-6794c81d-570c-4790-8817-7a490c638703 to disappear May 11 20:30:36.004: INFO: Pod pod-configmaps-6794c81d-570c-4790-8817-7a490c638703 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:30:36.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1139" for this suite. 
• [SLOW TEST:8.178 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":262,"skipped":4281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 20:30:36.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 11 20:30:41.742: INFO: Successfully updated pod "pod-update-41d0b38d-b334-4aca-9c5a-5ed186683442"
STEP: verifying the updated pod is in kubernetes
May 11 20:30:41.755: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 20:30:41.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9423" for this suite.
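[Annotation] The get-mutate-update cycle behind "updating the pod" looks roughly like this with client-go (kubeconfig path taken from the log; the pod name and label are hypothetical; assumes client-go v0.18 or newer, where context arguments are required; conflict retries omitted):

// podupdate.go - sketch: fetch a pod, change a label, write it back.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	pod, err := cs.CoreV1().Pods("pods-9423").Get(ctx, "pod-update-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // mutate something benign, then write it back
	if _, err := cs.CoreV1().Pods(pod.Namespace).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("Pod update OK")
}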
• [SLOW TEST:5.362 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":263,"skipped":4303,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:30:41.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-2896/configmap-test-767ec917-c17c-4aaa-902c-7d8ffa8484e8 STEP: Creating a pod to test consume configMaps May 11 20:30:41.910: INFO: Waiting up to 5m0s for pod "pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85" in namespace "configmap-2896" to be "Succeeded or Failed" May 11 20:30:41.922: INFO: Pod "pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85": Phase="Pending", Reason="", readiness=false. Elapsed: 11.113047ms May 11 20:30:43.926: INFO: Pod "pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015875166s May 11 20:30:45.987: INFO: Pod "pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076196644s May 11 20:30:47.990: INFO: Pod "pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079940274s STEP: Saw pod success May 11 20:30:47.990: INFO: Pod "pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85" satisfied condition "Succeeded or Failed" May 11 20:30:47.994: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85 container env-test: STEP: delete the pod May 11 20:30:48.324: INFO: Waiting for pod pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85 to disappear May 11 20:30:48.452: INFO: Pod pod-configmaps-36ea7104-b694-468d-ac12-00075f8d4a85 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:30:48.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2896" for this suite. 
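[Annotation] The env-var variant above wires a configMap key into the container environment rather than a volume. A sketch of the relevant EnvVar (the configMap name is shortened and the variable name is illustrative):

// configmapenv.go - sketch: a configMap key surfaced as an environment
// variable, the mechanism exercised by the test above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "CONFIG_DATA_1", // the variable the test container echoes
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-example"},
				Key:                  "data-1",
			},
		},
	}
	b, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(b))
}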
• [SLOW TEST:6.699 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:30:48.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-8cfedc0b-f0a8-4427-a17a-a3c1170141fe STEP: Creating a pod to test consume configMaps May 11 20:30:49.307: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f" in namespace "configmap-3829" to be "Succeeded or Failed" May 11 20:30:49.830: INFO: Pod "pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f": Phase="Pending", Reason="", readiness=false. Elapsed: 523.242558ms May 11 20:30:51.834: INFO: Pod "pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.526529237s May 11 20:30:53.844: INFO: Pod "pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537050404s May 11 20:30:56.010: INFO: Pod "pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.702603955s May 11 20:30:58.202: INFO: Pod "pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.894635556s STEP: Saw pod success May 11 20:30:58.202: INFO: Pod "pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f" satisfied condition "Succeeded or Failed" May 11 20:30:58.204: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f container configmap-volume-test: STEP: delete the pod May 11 20:30:58.465: INFO: Waiting for pod pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f to disappear May 11 20:30:58.548: INFO: Pod pod-configmaps-c3efac5d-b7dc-402a-878b-85b078e9a75f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:30:58.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3829" for this suite. 
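[Annotation] The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed" ... Phase="Pending" ... Elapsed: ...' records throughout these specs come from a poll loop. A sketch of that pattern using apimachinery's wait helpers (namespace from the log; pod name hypothetical; the framework's own helper is more elaborate):

// podwait.go - sketch: poll a pod until it reaches Succeeded or Failed,
// logging the phase each round, with a 5m timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodDone(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodDone(cs, "configmap-3829", "pod-configmaps-example"); err != nil {
		panic(err)
	}
}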
• [SLOW TEST:10.095 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":265,"skipped":4366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:30:58.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 20:30:58.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62" in namespace "projected-4511" to be "Succeeded or Failed" May 11 20:30:58.795: INFO: Pod "downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62": Phase="Pending", Reason="", readiness=false. Elapsed: 5.782946ms May 11 20:31:00.859: INFO: Pod "downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069885451s May 11 20:31:02.864: INFO: Pod "downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62": Phase="Running", Reason="", readiness=true. Elapsed: 4.074624933s May 11 20:31:04.869: INFO: Pod "downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07945405s STEP: Saw pod success May 11 20:31:04.869: INFO: Pod "downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62" satisfied condition "Succeeded or Failed" May 11 20:31:04.872: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62 container client-container: STEP: delete the pod May 11 20:31:04.910: INFO: Waiting for pod downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62 to disappear May 11 20:31:05.039: INFO: Pod downwardapi-volume-f2f67728-08db-4909-a5d6-1718b0433c62 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:31:05.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4511" for this suite. 
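[Annotation] The projected downwardAPI spec above exposes the container's own cpu request as a file inside the volume. A sketch of the projection source (the container and file names are illustrative):

// downwardcpu.go - sketch: a projected downwardAPI item surfacing
// requests.cpu at a path in the mounted volume.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	src := corev1.VolumeProjection{
		DownwardAPI: &corev1.DownwardAPIProjection{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "cpu_request",
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "client-container",
					Resource:      "requests.cpu",
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b))
}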
• [SLOW TEST:6.530 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":266,"skipped":4408,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 11 20:31:05.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 11 20:31:05.467: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ac2b3669-3fe4-4248-bfda-a1234f392cc0" in namespace "security-context-test-7322" to be "Succeeded or Failed"
May 11 20:31:05.474: INFO: Pod "busybox-readonly-false-ac2b3669-3fe4-4248-bfda-a1234f392cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.036405ms
May 11 20:31:07.488: INFO: Pod "busybox-readonly-false-ac2b3669-3fe4-4248-bfda-a1234f392cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021353244s
May 11 20:31:09.492: INFO: Pod "busybox-readonly-false-ac2b3669-3fe4-4248-bfda-a1234f392cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025557893s
May 11 20:31:09.492: INFO: Pod "busybox-readonly-false-ac2b3669-3fe4-4248-bfda-a1234f392cc0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 11 20:31:09.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7322" for this suite.
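[Annotation] The security-context spec above succeeds only if a write to the root filesystem works, since readOnlyRootFilesystem is explicitly false. A sketch of the container under test (the command and file path are illustrative):

// writablerootfs.go - sketch: container with readOnlyRootFilesystem=false;
// the pod reaches Succeeded only if the rootfs write goes through.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	c := corev1.Container{
		Name:    "busybox-readonly-false",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo ok > /volume_mount_test && cat /volume_mount_test"},
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: boolPtr(false),
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}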
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:31:09.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 11 20:31:09.716: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:09.809: INFO: Number of nodes with available pods: 0 May 11 20:31:09.809: INFO: Node latest-worker is running more than one daemon pod May 11 20:31:10.814: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:10.832: INFO: Number of nodes with available pods: 0 May 11 20:31:10.832: INFO: Node latest-worker is running more than one daemon pod May 11 20:31:11.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:11.823: INFO: Number of nodes with available pods: 0 May 11 20:31:11.823: INFO: Node latest-worker is running more than one daemon pod May 11 20:31:12.824: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:12.828: INFO: Number of nodes with available pods: 0 May 11 20:31:12.828: INFO: Node latest-worker is running more than one daemon pod May 11 20:31:13.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:13.894: INFO: Number of nodes with available pods: 0 May 11 20:31:13.894: INFO: Node latest-worker is running more than one daemon pod May 11 20:31:14.836: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:14.839: INFO: Number of nodes with available pods: 1 May 11 20:31:14.839: INFO: Node latest-worker is running more than one daemon pod May 11 20:31:15.814: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 
20:31:15.817: INFO: Number of nodes with available pods: 2 May 11 20:31:15.817: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 11 20:31:15.834: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:15.836: INFO: Number of nodes with available pods: 1 May 11 20:31:15.836: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:16.842: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:16.847: INFO: Number of nodes with available pods: 1 May 11 20:31:16.847: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:17.842: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:17.845: INFO: Number of nodes with available pods: 1 May 11 20:31:17.845: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:18.841: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:18.844: INFO: Number of nodes with available pods: 1 May 11 20:31:18.844: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:19.841: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:19.844: INFO: Number of nodes with available pods: 1 May 11 20:31:19.844: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:20.842: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:20.846: INFO: Number of nodes with available pods: 1 May 11 20:31:20.846: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:21.840: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:21.844: INFO: Number of nodes with available pods: 1 May 11 20:31:21.844: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:22.841: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:22.844: INFO: Number of nodes with available pods: 1 May 11 20:31:22.844: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:23.841: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:23.844: INFO: Number of nodes with available pods: 1 May 11 20:31:23.844: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:24.840: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:24.992: INFO: Number of nodes with available 
pods: 1 May 11 20:31:24.992: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:25.841: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:25.844: INFO: Number of nodes with available pods: 1 May 11 20:31:25.844: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:26.840: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:26.932: INFO: Number of nodes with available pods: 1 May 11 20:31:26.932: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:27.869: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:27.871: INFO: Number of nodes with available pods: 1 May 11 20:31:27.871: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:28.885: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:28.888: INFO: Number of nodes with available pods: 1 May 11 20:31:28.888: INFO: Node latest-worker2 is running more than one daemon pod May 11 20:31:29.980: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 20:31:29.983: INFO: Number of nodes with available pods: 2 May 11 20:31:29.983: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-307, will wait for the garbage collector to delete the pods May 11 20:31:30.167: INFO: Deleting DaemonSet.extensions daemon-set took: 91.248662ms May 11 20:31:30.668: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.234578ms May 11 20:31:45.356: INFO: Number of nodes with available pods: 0 May 11 20:31:45.356: INFO: Number of running nodes: 0, number of available pods: 0 May 11 20:31:45.358: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-307/daemonsets","resourceVersion":"3565228"},"items":null} May 11 20:31:45.360: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-307/pods","resourceVersion":"3565228"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:31:45.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-307" for this suite. 
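[Annotation] The recurring "DaemonSet pods can't tolerate node latest-control-plane with taints [...NoSchedule...], skip checking this node" records reflect taint/toleration matching: the simple DaemonSet declares no toleration for the master taint, so the control-plane node is excluded from the expected-pod count. A deliberately simplified model of that decision (real scheduling logic also considers values, operators, and more effects):

// tainttoleration.go - simplified sketch of why latest-control-plane is
// skipped: no toleration matches the node-role.kubernetes.io/master taint.
package main

import "fmt"

type taint struct{ Key, Effect string }
type toleration struct{ Key, Effect string }

func tolerates(tols []toleration, t taint) bool {
	for _, tol := range tols {
		// Empty effect on a toleration matches all effects in this model.
		if tol.Key == t.Key && (tol.Effect == "" || tol.Effect == t.Effect) {
			return true
		}
	}
	return false
}

func main() {
	master := taint{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}
	var dsTolerations []toleration // the simple DaemonSet declares none
	if !tolerates(dsTolerations, master) {
		fmt.Println("skip checking node latest-control-plane")
	}
}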
• [SLOW TEST:35.869 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":268,"skipped":4470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:31:45.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 11 20:31:45.788: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 11 20:31:45.828: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 11 20:31:45.828: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 11 20:31:45.932: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 11 20:31:45.932: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 11 20:31:45.966: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 11 20:31:45.966: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 11 20:31:54.127: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:31:54.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3277" for this suite. • [SLOW TEST:9.162 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":269,"skipped":4497,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:31:54.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:32:03.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9914" for this suite. 
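The LimitRange verification earlier in this block shows admission-time defaulting: a pod created with no resource requirements inherits DefaultRequest and Default wholesale, and a pod with partial requirements keeps its own values while only the missing fields are filled in. A minimal sketch of a LimitRange carrying the same values the log verifies (200Mi is 209715200 bytes and 200Gi is 214748364800 bytes; the object name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	lr := corev1.LimitRange{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "LimitRange"},
		ObjectMeta: metav1.ObjectMeta{Name: "limits"}, // illustrative name
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// Filled into spec.containers[].resources.requests when a pod
				// declares none; these match the "Verifying requests" lines.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
				},
				// Filled into resources.limits; these match "Verifying limits".
				Default: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(lr, "", "  ")
	fmt.Println(string(out))
}

The "partial resource requirements" step confirms the merge semantics: the pod's own 300m cpu request survives while memory and ephemeral-storage are defaulted around it.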
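The Kubelet test that just finished runs a container whose command always fails and asserts that the kubelet surfaces a terminated state with a non-empty reason. A minimal sketch of such a pod, assuming an illustrative name and a busybox image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "always-fails",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits non-zero on every run
			}},
		},
	}
	// Once the container exits, the kubelet records the failure under
	// status.containerStatuses[0].state.terminated (or lastState.terminated
	// between restarts) with a non-empty Reason such as "Error"; that reason
	// field is what the conformance test asserts on.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}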
• [SLOW TEST:9.597 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4505,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:32:04.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 20:32:04.535: INFO: Waiting up to 5m0s for pod "pod-a218e35f-14b5-4b2b-88cf-5b773b04209b" in namespace "emptydir-6757" to be "Succeeded or Failed" May 11 20:32:04.781: INFO: Pod "pod-a218e35f-14b5-4b2b-88cf-5b773b04209b": Phase="Pending", Reason="", readiness=false. Elapsed: 246.680932ms May 11 20:32:06.944: INFO: Pod "pod-a218e35f-14b5-4b2b-88cf-5b773b04209b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409608796s May 11 20:32:08.956: INFO: Pod "pod-a218e35f-14b5-4b2b-88cf-5b773b04209b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420962426s May 11 20:32:10.959: INFO: Pod "pod-a218e35f-14b5-4b2b-88cf-5b773b04209b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.423826095s STEP: Saw pod success May 11 20:32:10.959: INFO: Pod "pod-a218e35f-14b5-4b2b-88cf-5b773b04209b" satisfied condition "Succeeded or Failed" May 11 20:32:10.961: INFO: Trying to get logs from node latest-worker2 pod pod-a218e35f-14b5-4b2b-88cf-5b773b04209b container test-container: STEP: delete the pod May 11 20:32:10.991: INFO: Waiting for pod pod-a218e35f-14b5-4b2b-88cf-5b773b04209b to disappear May 11 20:32:11.087: INFO: Pod pod-a218e35f-14b5-4b2b-88cf-5b773b04209b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:32:11.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6757" for this suite. 
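The EmptyDir case above exercises a tmpfs-backed volume: with medium "Memory" the kubelet mounts a RAM-backed filesystem rather than node disk, and the pod must exit 0 after checking that a file created as root with mode 0777 looks correct on that mount. A minimal sketch, with a busybox command standing in for the suite's mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-tmpfs-0777"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // lets the pod reach "Succeeded"
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Medium "Memory" backs the emptyDir with tmpfs.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Prints the tmpfs mount line and the directory mode; a rough
				// stand-in for the checks the conformance image performs.
				Command:      []string{"sh", "-c", "mount | grep /test-volume && stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}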
• [SLOW TEST:6.963 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":271,"skipped":4509,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:32:11.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-1334d059-4b80-47ee-b1f2-901b902af871 STEP: Creating secret with name s-test-opt-upd-1f39577a-5d75-4dbc-a685-9f9f919ff56f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1334d059-4b80-47ee-b1f2-901b902af871 STEP: Updating secret s-test-opt-upd-1f39577a-5d75-4dbc-a685-9f9f919ff56f STEP: Creating secret with name s-test-opt-create-f162abd6-7204-4a3e-bbc4-9e241e428a4b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:33:30.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3547" for this suite. 
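The projected-secret test mounts several secrets through a single projected volume, then deletes one source, updates another, and creates a third, waiting for the kubelet to reflect each change in the mounted files; marking each source optional is what lets the pod keep running while a source is absent. A minimal sketch of such a volume, reusing the s-test-opt-* names from the log (the surrounding pod spec and image are assumed):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secret-volume", // illustrative name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						// Deleted mid-test; Optional keeps the pod alive.
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del-1334d059-4b80-47ee-b1f2-901b902af871"},
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						// Updated mid-test; the kubelet rewrites the mounted files.
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd-1f39577a-5d75-4dbc-a685-9f9f919ff56f"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

The long "waiting to observe update in volume" stretch (about 80 seconds here) is expected: secret volumes are refreshed on the kubelet's periodic sync rather than instantly.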
• [SLOW TEST:79.804 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":272,"skipped":4517,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:33:30.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1505 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 20:33:30.969: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 11 20:33:31.159: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 20:33:33.298: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 20:33:35.208: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 11 20:33:37.243: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:33:39.370: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:33:41.164: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:33:43.202: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:33:45.162: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:33:47.163: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:33:49.163: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:33:51.162: INFO: The status of Pod netserver-0 is Running (Ready = false) May 11 20:33:53.232: INFO: The status of Pod netserver-0 is Running (Ready = true) May 11 20:33:53.237: INFO: The status of Pod netserver-1 is Running (Ready = false) May 11 20:33:55.241: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 11 20:34:01.320: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.254:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1505 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:34:01.320: INFO: >>> kubeConfig: /root/.kube/config I0511 20:34:01.353812 7 log.go:172] (0xc002ea6370) (0xc001209ea0) Create stream I0511 20:34:01.353860 7 log.go:172] (0xc002ea6370) (0xc001209ea0) Stream added, broadcasting: 1 I0511 20:34:01.356072 7 log.go:172] 
(0xc002ea6370) Reply frame received for 1 I0511 20:34:01.356114 7 log.go:172] (0xc002ea6370) (0xc0020b80a0) Create stream I0511 20:34:01.356125 7 log.go:172] (0xc002ea6370) (0xc0020b80a0) Stream added, broadcasting: 3 I0511 20:34:01.357393 7 log.go:172] (0xc002ea6370) Reply frame received for 3 I0511 20:34:01.357418 7 log.go:172] (0xc002ea6370) (0xc001209f40) Create stream I0511 20:34:01.357425 7 log.go:172] (0xc002ea6370) (0xc001209f40) Stream added, broadcasting: 5 I0511 20:34:01.358494 7 log.go:172] (0xc002ea6370) Reply frame received for 5 I0511 20:34:01.456214 7 log.go:172] (0xc002ea6370) Data frame received for 3 I0511 20:34:01.456246 7 log.go:172] (0xc0020b80a0) (3) Data frame handling I0511 20:34:01.456265 7 log.go:172] (0xc0020b80a0) (3) Data frame sent I0511 20:34:01.456282 7 log.go:172] (0xc002ea6370) Data frame received for 3 I0511 20:34:01.456310 7 log.go:172] (0xc0020b80a0) (3) Data frame handling I0511 20:34:01.456549 7 log.go:172] (0xc002ea6370) Data frame received for 5 I0511 20:34:01.456583 7 log.go:172] (0xc001209f40) (5) Data frame handling I0511 20:34:01.458767 7 log.go:172] (0xc002ea6370) Data frame received for 1 I0511 20:34:01.458791 7 log.go:172] (0xc001209ea0) (1) Data frame handling I0511 20:34:01.458806 7 log.go:172] (0xc001209ea0) (1) Data frame sent I0511 20:34:01.458821 7 log.go:172] (0xc002ea6370) (0xc001209ea0) Stream removed, broadcasting: 1 I0511 20:34:01.458892 7 log.go:172] (0xc002ea6370) Go away received I0511 20:34:01.458936 7 log.go:172] (0xc002ea6370) (0xc001209ea0) Stream removed, broadcasting: 1 I0511 20:34:01.458969 7 log.go:172] (0xc002ea6370) (0xc0020b80a0) Stream removed, broadcasting: 3 I0511 20:34:01.458983 7 log.go:172] (0xc002ea6370) (0xc001209f40) Stream removed, broadcasting: 5 May 11 20:34:01.459: INFO: Found all expected endpoints: [netserver-0] May 11 20:34:01.462: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.154:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1505 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:34:01.462: INFO: >>> kubeConfig: /root/.kube/config I0511 20:34:01.491385 7 log.go:172] (0xc002ea69a0) (0xc001e78460) Create stream I0511 20:34:01.491411 7 log.go:172] (0xc002ea69a0) (0xc001e78460) Stream added, broadcasting: 1 I0511 20:34:01.492913 7 log.go:172] (0xc002ea69a0) Reply frame received for 1 I0511 20:34:01.492951 7 log.go:172] (0xc002ea69a0) (0xc001e78820) Create stream I0511 20:34:01.492973 7 log.go:172] (0xc002ea69a0) (0xc001e78820) Stream added, broadcasting: 3 I0511 20:34:01.493989 7 log.go:172] (0xc002ea69a0) Reply frame received for 3 I0511 20:34:01.494012 7 log.go:172] (0xc002ea69a0) (0xc001e78be0) Create stream I0511 20:34:01.494022 7 log.go:172] (0xc002ea69a0) (0xc001e78be0) Stream added, broadcasting: 5 I0511 20:34:01.494988 7 log.go:172] (0xc002ea69a0) Reply frame received for 5 I0511 20:34:01.555211 7 log.go:172] (0xc002ea69a0) Data frame received for 3 I0511 20:34:01.555288 7 log.go:172] (0xc001e78820) (3) Data frame handling I0511 20:34:01.555322 7 log.go:172] (0xc001e78820) (3) Data frame sent I0511 20:34:01.555344 7 log.go:172] (0xc002ea69a0) Data frame received for 3 I0511 20:34:01.555358 7 log.go:172] (0xc001e78820) (3) Data frame handling I0511 20:34:01.555385 7 log.go:172] (0xc002ea69a0) Data frame received for 5 I0511 20:34:01.555400 7 log.go:172] (0xc001e78be0) (5) Data frame handling I0511 20:34:01.557137 7 log.go:172] (0xc002ea69a0) 
Data frame received for 1 I0511 20:34:01.557253 7 log.go:172] (0xc001e78460) (1) Data frame handling I0511 20:34:01.557268 7 log.go:172] (0xc001e78460) (1) Data frame sent I0511 20:34:01.557449 7 log.go:172] (0xc002ea69a0) (0xc001e78460) Stream removed, broadcasting: 1 I0511 20:34:01.557504 7 log.go:172] (0xc002ea69a0) Go away received I0511 20:34:01.557629 7 log.go:172] (0xc002ea69a0) (0xc001e78460) Stream removed, broadcasting: 1 I0511 20:34:01.557659 7 log.go:172] (0xc002ea69a0) (0xc001e78820) Stream removed, broadcasting: 3 I0511 20:34:01.557674 7 log.go:172] (0xc002ea69a0) (0xc001e78be0) Stream removed, broadcasting: 5 May 11 20:34:01.557: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:34:01.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1505" for this suite. • [SLOW TEST:30.668 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4538,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:34:01.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 11 20:34:01.643: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 20:34:01.656: INFO: Waiting for terminating namespaces to be deleted... 
May 11 20:34:01.658: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 11 20:34:01.664: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 11 20:34:01.664: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 11 20:34:01.664: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 11 20:34:01.664: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 11 20:34:01.664: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 20:34:01.664: INFO: Container kindnet-cni ready: true, restart count 0 May 11 20:34:01.664: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 11 20:34:01.664: INFO: Container kube-proxy ready: true, restart count 0 May 11 20:34:01.664: INFO: netserver-0 from pod-network-test-1505 started at 2020-05-11 20:33:31 +0000 UTC (1 container statuses recorded) May 11 20:34:01.664: INFO: Container webserver ready: true, restart count 0 May 11 20:34:01.664: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 11 20:34:01.669: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 11 20:34:01.669: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 11 20:34:01.669: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 20:34:01.669: INFO: Container kindnet-cni ready: true, restart count 0 May 11 20:34:01.669: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 11 20:34:01.669: INFO: Container kube-proxy ready: true, restart count 0 May 11 20:34:01.669: INFO: host-test-container-pod from pod-network-test-1505 started at 2020-05-11 20:33:55 +0000 UTC (1 container statuses recorded) May 11 20:34:01.669: INFO: Container agnhost ready: true, restart count 0 May 11 20:34:01.669: INFO: netserver-1 from pod-network-test-1505 started at 2020-05-11 20:33:31 +0000 UTC (1 container statuses recorded) May 11 20:34:01.669: INFO: Container webserver ready: true, restart count 0 May 11 20:34:01.669: INFO: test-container-pod from pod-network-test-1505 started at 2020-05-11 20:33:55 +0000 UTC (1 container statuses recorded) May 11 20:34:01.669: INFO: Container webserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e139342b1f031], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e139345344e86], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
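The two FailedScheduling events above are the assertion of this test: a pod whose nodeSelector matches no node label must stay Pending with exactly that scheduler message. A minimal sketch of such a pod; the selector key and value are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler emits
			// "0/3 nodes are available: 3 node(s) didn't match node selector."
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers:   []corev1.Container{{Name: "restricted", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}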
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:34:02.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9159" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":274,"skipped":4552,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:34:02.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-7c4b94c2-bd12-402d-9eef-cd8c4e3ff46a STEP: Creating secret with name s-test-opt-upd-219fa557-5a1c-4a48-a443-2ea906084c30 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7c4b94c2-bd12-402d-9eef-cd8c4e3ff46a STEP: Updating secret s-test-opt-upd-219fa557-5a1c-4a48-a443-2ea906084c30 STEP: Creating secret with name s-test-opt-create-1d855282-6fa4-4533-9687-6933434ff9a5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:35:44.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7178" for this suite. 
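This is the plain secret-volume twin of the projected-secret test above: the same delete/update/create cycle, but each secret mounted through its own secret volume. A sketch of one such volume marked optional, so the pod can start before s-test-opt-create exists and survives the deletion of s-test-opt-del (the pod wrapper and image are assumed):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "creates-volume", // illustrative name
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				// Optional: the pod starts even while this secret is absent;
				// the kubelet populates the volume once the secret appears,
				// which is the update the test waits to observe.
				SecretName: "s-test-opt-create-1d855282-6fa4-4533-9687-6933434ff9a5",
				Optional:   &optional,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}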
• [SLOW TEST:101.610 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4565,"failed":0} [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:35:44.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 11 20:37:45.460: INFO: Successfully updated pod "var-expansion-d43e1fce-a2a6-4ffe-a385-5433c73baea4" STEP: waiting for pod running STEP: deleting the pod gracefully May 11 20:37:49.670: INFO: Deleting pod "var-expansion-d43e1fce-a2a6-4ffe-a385-5433c73baea4" in namespace "var-expansion-6626" May 11 20:37:50.219: INFO: Wait up to 5m0s for pod "var-expansion-d43e1fce-a2a6-4ffe-a385-5433c73baea4" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:38:24.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6626" for this suite. 
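The two-minute wait before "Successfully updated pod" is the point of the variable-expansion test: the container mounts a volume with a subPathExpr that expands an environment variable fed from pod metadata, the referenced value is initially missing, so the expansion fails and the container cannot start until the pod is updated. A minimal sketch of the mechanism, assuming an annotation named mysubpath and a busybox image (the suite's exact field names are not shown in this log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		// Note: no "mysubpath" annotation at creation time, so MY_SUBPATH
		// cannot be resolved and the container stays in a failed condition.
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Env: []corev1.EnvVar{{
					Name: "MY_SUBPATH",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations['mysubpath']"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/volume_mount",
					SubPathExpr: "$(MY_SUBPATH)", // expanded by the kubelet at container creation
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Adding the annotation afterwards, for example with kubectl annotate pod var-expansion mysubpath=subdir, lets the kubelet resolve the subpath and start the container, matching the "updating the pod" then "waiting for pod running" steps above.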
• [SLOW TEST:160.357 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":276,"skipped":4565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:38:24.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 11 20:38:44.545: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:44.545: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:44.573487 7 log.go:172] (0xc00294e370) (0xc001ee8780) Create stream I0511 20:38:44.573510 7 log.go:172] (0xc00294e370) (0xc001ee8780) Stream added, broadcasting: 1 I0511 20:38:44.574836 7 log.go:172] (0xc00294e370) Reply frame received for 1 I0511 20:38:44.574856 7 log.go:172] (0xc00294e370) (0xc001209d60) Create stream I0511 20:38:44.574861 7 log.go:172] (0xc00294e370) (0xc001209d60) Stream added, broadcasting: 3 I0511 20:38:44.575688 7 log.go:172] (0xc00294e370) Reply frame received for 3 I0511 20:38:44.575727 7 log.go:172] (0xc00294e370) (0xc001ee8820) Create stream I0511 20:38:44.575736 7 log.go:172] (0xc00294e370) (0xc001ee8820) Stream added, broadcasting: 5 I0511 20:38:44.576603 7 log.go:172] (0xc00294e370) Reply frame received for 5 I0511 20:38:44.628509 7 log.go:172] (0xc00294e370) Data frame received for 5 I0511 20:38:44.628532 7 log.go:172] (0xc001ee8820) (5) Data frame handling I0511 20:38:44.628562 7 log.go:172] (0xc00294e370) Data frame received for 3 I0511 20:38:44.628583 7 log.go:172] (0xc001209d60) (3) Data frame handling I0511 20:38:44.628594 7 log.go:172] (0xc001209d60) (3) Data frame sent I0511 20:38:44.628605 7 log.go:172] (0xc00294e370) Data frame received for 3 I0511 20:38:44.628615 7 log.go:172] (0xc001209d60) (3) Data frame handling I0511 20:38:44.634608 7 log.go:172] (0xc00294e370) Data frame received for 1 I0511 20:38:44.634631 7 log.go:172] (0xc001ee8780) (1) Data frame handling I0511 20:38:44.634648 7 log.go:172] (0xc001ee8780) (1) 
Data frame sent I0511 20:38:44.634669 7 log.go:172] (0xc00294e370) (0xc001ee8780) Stream removed, broadcasting: 1 I0511 20:38:44.634691 7 log.go:172] (0xc00294e370) Go away received I0511 20:38:44.634819 7 log.go:172] (0xc00294e370) (0xc001ee8780) Stream removed, broadcasting: 1 I0511 20:38:44.634839 7 log.go:172] (0xc00294e370) (0xc001209d60) Stream removed, broadcasting: 3 I0511 20:38:44.634857 7 log.go:172] (0xc00294e370) (0xc001ee8820) Stream removed, broadcasting: 5 May 11 20:38:44.634: INFO: Exec stderr: "" May 11 20:38:44.634: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:44.634: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:44.696691 7 log.go:172] (0xc00294eb00) (0xc001ee8aa0) Create stream I0511 20:38:44.696720 7 log.go:172] (0xc00294eb00) (0xc001ee8aa0) Stream added, broadcasting: 1 I0511 20:38:44.698258 7 log.go:172] (0xc00294eb00) Reply frame received for 1 I0511 20:38:44.698287 7 log.go:172] (0xc00294eb00) (0xc001ee8b40) Create stream I0511 20:38:44.698299 7 log.go:172] (0xc00294eb00) (0xc001ee8b40) Stream added, broadcasting: 3 I0511 20:38:44.699560 7 log.go:172] (0xc00294eb00) Reply frame received for 3 I0511 20:38:44.699580 7 log.go:172] (0xc00294eb00) (0xc001209e00) Create stream I0511 20:38:44.699588 7 log.go:172] (0xc00294eb00) (0xc001209e00) Stream added, broadcasting: 5 I0511 20:38:44.700488 7 log.go:172] (0xc00294eb00) Reply frame received for 5 I0511 20:38:44.766531 7 log.go:172] (0xc00294eb00) Data frame received for 5 I0511 20:38:44.766559 7 log.go:172] (0xc001209e00) (5) Data frame handling I0511 20:38:44.766575 7 log.go:172] (0xc00294eb00) Data frame received for 3 I0511 20:38:44.766582 7 log.go:172] (0xc001ee8b40) (3) Data frame handling I0511 20:38:44.766590 7 log.go:172] (0xc001ee8b40) (3) Data frame sent I0511 20:38:44.766599 7 log.go:172] (0xc00294eb00) Data frame received for 3 I0511 20:38:44.766612 7 log.go:172] (0xc001ee8b40) (3) Data frame handling I0511 20:38:44.767495 7 log.go:172] (0xc00294eb00) Data frame received for 1 I0511 20:38:44.767512 7 log.go:172] (0xc001ee8aa0) (1) Data frame handling I0511 20:38:44.767527 7 log.go:172] (0xc001ee8aa0) (1) Data frame sent I0511 20:38:44.767545 7 log.go:172] (0xc00294eb00) (0xc001ee8aa0) Stream removed, broadcasting: 1 I0511 20:38:44.767561 7 log.go:172] (0xc00294eb00) Go away received I0511 20:38:44.767606 7 log.go:172] (0xc00294eb00) (0xc001ee8aa0) Stream removed, broadcasting: 1 I0511 20:38:44.767628 7 log.go:172] (0xc00294eb00) (0xc001ee8b40) Stream removed, broadcasting: 3 I0511 20:38:44.767641 7 log.go:172] (0xc00294eb00) (0xc001209e00) Stream removed, broadcasting: 5 May 11 20:38:44.767: INFO: Exec stderr: "" May 11 20:38:44.767: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:44.767: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:44.789661 7 log.go:172] (0xc00303a370) (0xc001b7c0a0) Create stream I0511 20:38:44.789681 7 log.go:172] (0xc00303a370) (0xc001b7c0a0) Stream added, broadcasting: 1 I0511 20:38:44.790909 7 log.go:172] (0xc00303a370) Reply frame received for 1 I0511 20:38:44.790929 7 log.go:172] (0xc00303a370) (0xc001b7c280) Create stream I0511 20:38:44.790944 7 log.go:172] (0xc00303a370) (0xc001b7c280) Stream added, broadcasting: 3 I0511 20:38:44.791752 
7 log.go:172] (0xc00303a370) Reply frame received for 3 I0511 20:38:44.791788 7 log.go:172] (0xc00303a370) (0xc002dce0a0) Create stream I0511 20:38:44.791810 7 log.go:172] (0xc00303a370) (0xc002dce0a0) Stream added, broadcasting: 5 I0511 20:38:44.792450 7 log.go:172] (0xc00303a370) Reply frame received for 5 I0511 20:38:44.886103 7 log.go:172] (0xc00303a370) Data frame received for 5 I0511 20:38:44.886122 7 log.go:172] (0xc002dce0a0) (5) Data frame handling I0511 20:38:44.886146 7 log.go:172] (0xc00303a370) Data frame received for 3 I0511 20:38:44.886167 7 log.go:172] (0xc001b7c280) (3) Data frame handling I0511 20:38:44.886187 7 log.go:172] (0xc001b7c280) (3) Data frame sent I0511 20:38:44.886195 7 log.go:172] (0xc00303a370) Data frame received for 3 I0511 20:38:44.886212 7 log.go:172] (0xc001b7c280) (3) Data frame handling I0511 20:38:44.887058 7 log.go:172] (0xc00303a370) Data frame received for 1 I0511 20:38:44.887081 7 log.go:172] (0xc001b7c0a0) (1) Data frame handling I0511 20:38:44.887096 7 log.go:172] (0xc001b7c0a0) (1) Data frame sent I0511 20:38:44.887113 7 log.go:172] (0xc00303a370) (0xc001b7c0a0) Stream removed, broadcasting: 1 I0511 20:38:44.887130 7 log.go:172] (0xc00303a370) Go away received I0511 20:38:44.887216 7 log.go:172] (0xc00303a370) (0xc001b7c0a0) Stream removed, broadcasting: 1 I0511 20:38:44.887243 7 log.go:172] (0xc00303a370) (0xc001b7c280) Stream removed, broadcasting: 3 I0511 20:38:44.887255 7 log.go:172] (0xc00303a370) (0xc002dce0a0) Stream removed, broadcasting: 5 May 11 20:38:44.887: INFO: Exec stderr: "" May 11 20:38:44.887: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:44.887: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:44.913307 7 log.go:172] (0xc00294f290) (0xc001ee90e0) Create stream I0511 20:38:44.913325 7 log.go:172] (0xc00294f290) (0xc001ee90e0) Stream added, broadcasting: 1 I0511 20:38:44.914457 7 log.go:172] (0xc00294f290) Reply frame received for 1 I0511 20:38:44.914479 7 log.go:172] (0xc00294f290) (0xc002dce140) Create stream I0511 20:38:44.914487 7 log.go:172] (0xc00294f290) (0xc002dce140) Stream added, broadcasting: 3 I0511 20:38:44.915113 7 log.go:172] (0xc00294f290) Reply frame received for 3 I0511 20:38:44.915140 7 log.go:172] (0xc00294f290) (0xc000e29720) Create stream I0511 20:38:44.915152 7 log.go:172] (0xc00294f290) (0xc000e29720) Stream added, broadcasting: 5 I0511 20:38:44.915848 7 log.go:172] (0xc00294f290) Reply frame received for 5 I0511 20:38:44.984166 7 log.go:172] (0xc00294f290) Data frame received for 5 I0511 20:38:44.984205 7 log.go:172] (0xc000e29720) (5) Data frame handling I0511 20:38:44.984234 7 log.go:172] (0xc00294f290) Data frame received for 3 I0511 20:38:44.984252 7 log.go:172] (0xc002dce140) (3) Data frame handling I0511 20:38:44.984268 7 log.go:172] (0xc002dce140) (3) Data frame sent I0511 20:38:44.984284 7 log.go:172] (0xc00294f290) Data frame received for 3 I0511 20:38:44.984294 7 log.go:172] (0xc002dce140) (3) Data frame handling I0511 20:38:44.985349 7 log.go:172] (0xc00294f290) Data frame received for 1 I0511 20:38:44.985364 7 log.go:172] (0xc001ee90e0) (1) Data frame handling I0511 20:38:44.985377 7 log.go:172] (0xc001ee90e0) (1) Data frame sent I0511 20:38:44.985507 7 log.go:172] (0xc00294f290) (0xc001ee90e0) Stream removed, broadcasting: 1 I0511 20:38:44.985576 7 log.go:172] (0xc00294f290) Go away received I0511 20:38:44.985621 7 
log.go:172] (0xc00294f290) (0xc001ee90e0) Stream removed, broadcasting: 1 I0511 20:38:44.985660 7 log.go:172] (0xc00294f290) (0xc002dce140) Stream removed, broadcasting: 3 I0511 20:38:44.985680 7 log.go:172] (0xc00294f290) (0xc000e29720) Stream removed, broadcasting: 5 May 11 20:38:44.985: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 11 20:38:44.985: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:44.985: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:45.008380 7 log.go:172] (0xc002ea6000) (0xc000e29c20) Create stream I0511 20:38:45.008401 7 log.go:172] (0xc002ea6000) (0xc000e29c20) Stream added, broadcasting: 1 I0511 20:38:45.010031 7 log.go:172] (0xc002ea6000) Reply frame received for 1 I0511 20:38:45.010057 7 log.go:172] (0xc002ea6000) (0xc001ee9360) Create stream I0511 20:38:45.010066 7 log.go:172] (0xc002ea6000) (0xc001ee9360) Stream added, broadcasting: 3 I0511 20:38:45.010752 7 log.go:172] (0xc002ea6000) Reply frame received for 3 I0511 20:38:45.010779 7 log.go:172] (0xc002ea6000) (0xc002dce640) Create stream I0511 20:38:45.010789 7 log.go:172] (0xc002ea6000) (0xc002dce640) Stream added, broadcasting: 5 I0511 20:38:45.011478 7 log.go:172] (0xc002ea6000) Reply frame received for 5 I0511 20:38:45.066400 7 log.go:172] (0xc002ea6000) Data frame received for 3 I0511 20:38:45.066435 7 log.go:172] (0xc001ee9360) (3) Data frame handling I0511 20:38:45.066451 7 log.go:172] (0xc001ee9360) (3) Data frame sent I0511 20:38:45.066459 7 log.go:172] (0xc002ea6000) Data frame received for 3 I0511 20:38:45.066467 7 log.go:172] (0xc001ee9360) (3) Data frame handling I0511 20:38:45.066484 7 log.go:172] (0xc002ea6000) Data frame received for 5 I0511 20:38:45.066491 7 log.go:172] (0xc002dce640) (5) Data frame handling I0511 20:38:45.067386 7 log.go:172] (0xc002ea6000) Data frame received for 1 I0511 20:38:45.067403 7 log.go:172] (0xc000e29c20) (1) Data frame handling I0511 20:38:45.067414 7 log.go:172] (0xc000e29c20) (1) Data frame sent I0511 20:38:45.067433 7 log.go:172] (0xc002ea6000) (0xc000e29c20) Stream removed, broadcasting: 1 I0511 20:38:45.067495 7 log.go:172] (0xc002ea6000) Go away received I0511 20:38:45.067534 7 log.go:172] (0xc002ea6000) (0xc000e29c20) Stream removed, broadcasting: 1 I0511 20:38:45.067549 7 log.go:172] (0xc002ea6000) (0xc001ee9360) Stream removed, broadcasting: 3 I0511 20:38:45.067559 7 log.go:172] (0xc002ea6000) (0xc002dce640) Stream removed, broadcasting: 5 May 11 20:38:45.067: INFO: Exec stderr: "" May 11 20:38:45.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:45.067: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:45.130520 7 log.go:172] (0xc005d70420) (0xc002dcebe0) Create stream I0511 20:38:45.130556 7 log.go:172] (0xc005d70420) (0xc002dcebe0) Stream added, broadcasting: 1 I0511 20:38:45.132097 7 log.go:172] (0xc005d70420) Reply frame received for 1 I0511 20:38:45.132133 7 log.go:172] (0xc005d70420) (0xc001b7c500) Create stream I0511 20:38:45.132158 7 log.go:172] (0xc005d70420) (0xc001b7c500) Stream added, broadcasting: 3 I0511 20:38:45.132971 7 log.go:172] (0xc005d70420) Reply frame received for 3 I0511 20:38:45.133004 7 log.go:172] 
(0xc005d70420) (0xc0019d4280) Create stream I0511 20:38:45.133018 7 log.go:172] (0xc005d70420) (0xc0019d4280) Stream added, broadcasting: 5 I0511 20:38:45.133906 7 log.go:172] (0xc005d70420) Reply frame received for 5 I0511 20:38:45.184962 7 log.go:172] (0xc005d70420) Data frame received for 5 I0511 20:38:45.184997 7 log.go:172] (0xc0019d4280) (5) Data frame handling I0511 20:38:45.185015 7 log.go:172] (0xc005d70420) Data frame received for 3 I0511 20:38:45.185024 7 log.go:172] (0xc001b7c500) (3) Data frame handling I0511 20:38:45.185034 7 log.go:172] (0xc001b7c500) (3) Data frame sent I0511 20:38:45.185045 7 log.go:172] (0xc005d70420) Data frame received for 3 I0511 20:38:45.185057 7 log.go:172] (0xc001b7c500) (3) Data frame handling I0511 20:38:45.186043 7 log.go:172] (0xc005d70420) Data frame received for 1 I0511 20:38:45.186059 7 log.go:172] (0xc002dcebe0) (1) Data frame handling I0511 20:38:45.186072 7 log.go:172] (0xc002dcebe0) (1) Data frame sent I0511 20:38:45.186086 7 log.go:172] (0xc005d70420) (0xc002dcebe0) Stream removed, broadcasting: 1 I0511 20:38:45.186110 7 log.go:172] (0xc005d70420) Go away received I0511 20:38:45.186145 7 log.go:172] (0xc005d70420) (0xc002dcebe0) Stream removed, broadcasting: 1 I0511 20:38:45.186165 7 log.go:172] (0xc005d70420) (0xc001b7c500) Stream removed, broadcasting: 3 I0511 20:38:45.186210 7 log.go:172] (0xc005d70420) (0xc0019d4280) Stream removed, broadcasting: 5 May 11 20:38:45.186: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 11 20:38:45.186: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:45.186: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:45.215002 7 log.go:172] (0xc002e34370) (0xc0019d4640) Create stream I0511 20:38:45.215028 7 log.go:172] (0xc002e34370) (0xc0019d4640) Stream added, broadcasting: 1 I0511 20:38:45.216753 7 log.go:172] (0xc002e34370) Reply frame received for 1 I0511 20:38:45.216777 7 log.go:172] (0xc002e34370) (0xc000e29d60) Create stream I0511 20:38:45.216783 7 log.go:172] (0xc002e34370) (0xc000e29d60) Stream added, broadcasting: 3 I0511 20:38:45.217964 7 log.go:172] (0xc002e34370) Reply frame received for 3 I0511 20:38:45.218015 7 log.go:172] (0xc002e34370) (0xc0019d48c0) Create stream I0511 20:38:45.218037 7 log.go:172] (0xc002e34370) (0xc0019d48c0) Stream added, broadcasting: 5 I0511 20:38:45.218656 7 log.go:172] (0xc002e34370) Reply frame received for 5 I0511 20:38:45.276739 7 log.go:172] (0xc002e34370) Data frame received for 3 I0511 20:38:45.276762 7 log.go:172] (0xc000e29d60) (3) Data frame handling I0511 20:38:45.276778 7 log.go:172] (0xc000e29d60) (3) Data frame sent I0511 20:38:45.276792 7 log.go:172] (0xc002e34370) Data frame received for 3 I0511 20:38:45.276801 7 log.go:172] (0xc000e29d60) (3) Data frame handling I0511 20:38:45.276826 7 log.go:172] (0xc002e34370) Data frame received for 5 I0511 20:38:45.276875 7 log.go:172] (0xc0019d48c0) (5) Data frame handling I0511 20:38:45.278225 7 log.go:172] (0xc002e34370) Data frame received for 1 I0511 20:38:45.278251 7 log.go:172] (0xc0019d4640) (1) Data frame handling I0511 20:38:45.278262 7 log.go:172] (0xc0019d4640) (1) Data frame sent I0511 20:38:45.278272 7 log.go:172] (0xc002e34370) (0xc0019d4640) Stream removed, broadcasting: 1 I0511 20:38:45.278294 7 log.go:172] (0xc002e34370) Go away received I0511 
20:38:45.278334 7 log.go:172] (0xc002e34370) (0xc0019d4640) Stream removed, broadcasting: 1 I0511 20:38:45.278346 7 log.go:172] (0xc002e34370) (0xc000e29d60) Stream removed, broadcasting: 3 I0511 20:38:45.278353 7 log.go:172] (0xc002e34370) (0xc0019d48c0) Stream removed, broadcasting: 5 May 11 20:38:45.278: INFO: Exec stderr: "" May 11 20:38:45.278: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:45.278: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:45.303409 7 log.go:172] (0xc00303abb0) (0xc001b7ce60) Create stream I0511 20:38:45.303428 7 log.go:172] (0xc00303abb0) (0xc001b7ce60) Stream added, broadcasting: 1 I0511 20:38:45.305763 7 log.go:172] (0xc00303abb0) Reply frame received for 1 I0511 20:38:45.305789 7 log.go:172] (0xc00303abb0) (0xc0019d4b40) Create stream I0511 20:38:45.305800 7 log.go:172] (0xc00303abb0) (0xc0019d4b40) Stream added, broadcasting: 3 I0511 20:38:45.307559 7 log.go:172] (0xc00303abb0) Reply frame received for 3 I0511 20:38:45.307628 7 log.go:172] (0xc00303abb0) (0xc001ee9680) Create stream I0511 20:38:45.307667 7 log.go:172] (0xc00303abb0) (0xc001ee9680) Stream added, broadcasting: 5 I0511 20:38:45.309476 7 log.go:172] (0xc00303abb0) Reply frame received for 5 I0511 20:38:45.378654 7 log.go:172] (0xc00303abb0) Data frame received for 3 I0511 20:38:45.378670 7 log.go:172] (0xc0019d4b40) (3) Data frame handling I0511 20:38:45.378677 7 log.go:172] (0xc0019d4b40) (3) Data frame sent I0511 20:38:45.378681 7 log.go:172] (0xc00303abb0) Data frame received for 3 I0511 20:38:45.378685 7 log.go:172] (0xc0019d4b40) (3) Data frame handling I0511 20:38:45.378704 7 log.go:172] (0xc00303abb0) Data frame received for 5 I0511 20:38:45.378719 7 log.go:172] (0xc001ee9680) (5) Data frame handling I0511 20:38:45.379814 7 log.go:172] (0xc00303abb0) Data frame received for 1 I0511 20:38:45.379832 7 log.go:172] (0xc001b7ce60) (1) Data frame handling I0511 20:38:45.379840 7 log.go:172] (0xc001b7ce60) (1) Data frame sent I0511 20:38:45.379851 7 log.go:172] (0xc00303abb0) (0xc001b7ce60) Stream removed, broadcasting: 1 I0511 20:38:45.379925 7 log.go:172] (0xc00303abb0) (0xc001b7ce60) Stream removed, broadcasting: 1 I0511 20:38:45.379934 7 log.go:172] (0xc00303abb0) (0xc0019d4b40) Stream removed, broadcasting: 3 I0511 20:38:45.379969 7 log.go:172] (0xc00303abb0) Go away received I0511 20:38:45.380042 7 log.go:172] (0xc00303abb0) (0xc001ee9680) Stream removed, broadcasting: 5 May 11 20:38:45.380: INFO: Exec stderr: "" May 11 20:38:45.380: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:45.380: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:45.408003 7 log.go:172] (0xc00303b1e0) (0xc001b7d040) Create stream I0511 20:38:45.408039 7 log.go:172] (0xc00303b1e0) (0xc001b7d040) Stream added, broadcasting: 1 I0511 20:38:45.409479 7 log.go:172] (0xc00303b1e0) Reply frame received for 1 I0511 20:38:45.409522 7 log.go:172] (0xc00303b1e0) (0xc001ee97c0) Create stream I0511 20:38:45.409539 7 log.go:172] (0xc00303b1e0) (0xc001ee97c0) Stream added, broadcasting: 3 I0511 20:38:45.410335 7 log.go:172] (0xc00303b1e0) Reply frame received for 3 I0511 20:38:45.410376 7 log.go:172] (0xc00303b1e0) (0xc001ee9860) Create stream I0511 20:38:45.410393 7 
log.go:172] (0xc00303b1e0) (0xc001ee9860) Stream added, broadcasting: 5 I0511 20:38:45.411125 7 log.go:172] (0xc00303b1e0) Reply frame received for 5 I0511 20:38:45.463347 7 log.go:172] (0xc00303b1e0) Data frame received for 3 I0511 20:38:45.463372 7 log.go:172] (0xc001ee97c0) (3) Data frame handling I0511 20:38:45.463378 7 log.go:172] (0xc001ee97c0) (3) Data frame sent I0511 20:38:45.463383 7 log.go:172] (0xc00303b1e0) Data frame received for 3 I0511 20:38:45.463394 7 log.go:172] (0xc001ee97c0) (3) Data frame handling I0511 20:38:45.463413 7 log.go:172] (0xc00303b1e0) Data frame received for 5 I0511 20:38:45.463436 7 log.go:172] (0xc001ee9860) (5) Data frame handling I0511 20:38:45.464552 7 log.go:172] (0xc00303b1e0) Data frame received for 1 I0511 20:38:45.464569 7 log.go:172] (0xc001b7d040) (1) Data frame handling I0511 20:38:45.464588 7 log.go:172] (0xc001b7d040) (1) Data frame sent I0511 20:38:45.464690 7 log.go:172] (0xc00303b1e0) (0xc001b7d040) Stream removed, broadcasting: 1 I0511 20:38:45.464834 7 log.go:172] (0xc00303b1e0) (0xc001b7d040) Stream removed, broadcasting: 1 I0511 20:38:45.464863 7 log.go:172] (0xc00303b1e0) (0xc001ee97c0) Stream removed, broadcasting: 3 I0511 20:38:45.464969 7 log.go:172] (0xc00303b1e0) Go away received I0511 20:38:45.465319 7 log.go:172] (0xc00303b1e0) (0xc001ee9860) Stream removed, broadcasting: 5 May 11 20:38:45.465: INFO: Exec stderr: "" May 11 20:38:45.465: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6724 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:38:45.465: INFO: >>> kubeConfig: /root/.kube/config I0511 20:38:45.494903 7 log.go:172] (0xc00294fad0) (0xc001ee9b80) Create stream I0511 20:38:45.494926 7 log.go:172] (0xc00294fad0) (0xc001ee9b80) Stream added, broadcasting: 1 I0511 20:38:45.496933 7 log.go:172] (0xc00294fad0) Reply frame received for 1 I0511 20:38:45.496962 7 log.go:172] (0xc00294fad0) (0xc001ee9cc0) Create stream I0511 20:38:45.496988 7 log.go:172] (0xc00294fad0) (0xc001ee9cc0) Stream added, broadcasting: 3 I0511 20:38:45.498210 7 log.go:172] (0xc00294fad0) Reply frame received for 3 I0511 20:38:45.498267 7 log.go:172] (0xc00294fad0) (0xc002dced20) Create stream I0511 20:38:45.498290 7 log.go:172] (0xc00294fad0) (0xc002dced20) Stream added, broadcasting: 5 I0511 20:38:45.499075 7 log.go:172] (0xc00294fad0) Reply frame received for 5 I0511 20:38:45.564291 7 log.go:172] (0xc00294fad0) Data frame received for 5 I0511 20:38:45.564328 7 log.go:172] (0xc002dced20) (5) Data frame handling I0511 20:38:45.564358 7 log.go:172] (0xc00294fad0) Data frame received for 3 I0511 20:38:45.564379 7 log.go:172] (0xc001ee9cc0) (3) Data frame handling I0511 20:38:45.564395 7 log.go:172] (0xc001ee9cc0) (3) Data frame sent I0511 20:38:45.564404 7 log.go:172] (0xc00294fad0) Data frame received for 3 I0511 20:38:45.564420 7 log.go:172] (0xc001ee9cc0) (3) Data frame handling I0511 20:38:45.565273 7 log.go:172] (0xc00294fad0) Data frame received for 1 I0511 20:38:45.565294 7 log.go:172] (0xc001ee9b80) (1) Data frame handling I0511 20:38:45.565309 7 log.go:172] (0xc001ee9b80) (1) Data frame sent I0511 20:38:45.565320 7 log.go:172] (0xc00294fad0) (0xc001ee9b80) Stream removed, broadcasting: 1 I0511 20:38:45.565366 7 log.go:172] (0xc00294fad0) Go away received I0511 20:38:45.565395 7 log.go:172] (0xc00294fad0) (0xc001ee9b80) Stream removed, broadcasting: 1 I0511 20:38:45.565412 7 log.go:172] (0xc00294fad0) (0xc001ee9cc0) Stream 
removed, broadcasting: 3 I0511 20:38:45.565420 7 log.go:172] (0xc00294fad0) (0xc002dced20) Stream removed, broadcasting: 5 May 11 20:38:45.565: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:38:45.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6724" for this suite. • [SLOW TEST:20.904 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4653,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:38:45.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 20:38:48.366: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 20:38:50.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:38:52.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:38:54.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826328, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 20:38:57.907: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 11 20:38:58.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3452-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:39:00.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9470" for this suite. STEP: Destroying namespace "webhook-9470-markers" for this suite. 
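The step above that registers the mutating webhook for custom resource e2e-test-webhook-3452-crds.webhook.example.com amounts to creating a MutatingWebhookConfiguration that points at the just-deployed webhook service. What follows is a minimal client-go sketch of that call, not the suite's actual code: the namespace, service name, and CRD group/resource are taken from the log, while the service path, CA bundle, and rule details are illustrative placeholders.

    package main

    import (
        "context"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        path := "/mutating-custom-resource" // assumed path served by the webhook pod
        sideEffects := admissionregistrationv1.SideEffectClassNone
        cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
            Webhooks: []admissionregistrationv1.MutatingWebhook{{
                Name: "e2e-test-webhook-3452-crds.webhook.example.com",
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: "webhook-9470",
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                    // Placeholder: the PEM bundle for the server cert set up earlier.
                    CABundle: []byte("<PEM bundle>"),
                },
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups:   []string{"webhook.example.com"},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"e2e-test-webhook-3452-crds"},
                    },
                }},
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
        _, err = client.AdmissionregistrationV1().MutatingWebhookConfigurations().
            Create(context.TODO(), cfg, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
    }

With such a configuration in place, every CREATE of the custom resource is routed through the webhook service, which may return a JSON patch; pruning then drops any patched fields not declared in the CRD's structural schema, which is the behavior this spec verifies.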
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.030 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":278,"skipped":4657,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:39:00.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 20:39:01.208: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1" in namespace "downward-api-8387" to be "Succeeded or Failed" May 11 20:39:01.456: INFO: Pod "downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1": Phase="Pending", Reason="", readiness=false. Elapsed: 248.310619ms May 11 20:39:03.576: INFO: Pod "downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368660925s May 11 20:39:05.582: INFO: Pod "downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37469225s May 11 20:39:07.654: INFO: Pod "downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1": Phase="Running", Reason="", readiness=true. Elapsed: 6.445902571s May 11 20:39:09.657: INFO: Pod "downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.448992766s STEP: Saw pod success May 11 20:39:09.657: INFO: Pod "downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1" satisfied condition "Succeeded or Failed" May 11 20:39:09.659: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1 container client-container: STEP: delete the pod May 11 20:39:09.719: INFO: Waiting for pod downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1 to disappear May 11 20:39:09.731: INFO: Pod downwardapi-volume-2af85d93-185c-4be6-9bbd-3f294aed58a1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:39:09.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8387" for this suite. 
• [SLOW TEST:9.150 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4665,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:39:09.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 11 20:39:10.075: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-a 32edb693-b00d-459b-b1ce-bb6dc7281d88 3567219 0 2020-05-11 20:39:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:39:10.075: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-a 32edb693-b00d-459b-b1ce-bb6dc7281d88 3567219 0 2020-05-11 20:39:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 11 20:39:20.081: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-a 32edb693-b00d-459b-b1ce-bb6dc7281d88 3567260 0 2020-05-11 20:39:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:39:20.081: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-a 32edb693-b00d-459b-b1ce-bb6dc7281d88 3567260 0 2020-05-11 20:39:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 
2020-05-11 20:39:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 11 20:39:30.093: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-a 32edb693-b00d-459b-b1ce-bb6dc7281d88 3567301 0 2020-05-11 20:39:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:39:30.093: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-a 32edb693-b00d-459b-b1ce-bb6dc7281d88 3567301 0 2020-05-11 20:39:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 11 20:39:40.100: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-a 32edb693-b00d-459b-b1ce-bb6dc7281d88 3567341 0 2020-05-11 20:39:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:39:40.101: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-a 32edb693-b00d-459b-b1ce-bb6dc7281d88 3567341 0 2020-05-11 20:39:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 11 20:39:50.107: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-b daaa8eb5-7637-49b9-8422-4b8492b6e235 3567372 0 2020-05-11 20:39:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:39:50.107: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-b daaa8eb5-7637-49b9-8422-4b8492b6e235 3567372 0 2020-05-11 20:39:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-11 
20:39:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 11 20:40:00.114: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-b daaa8eb5-7637-49b9-8422-4b8492b6e235 3567412 0 2020-05-11 20:39:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 11 20:40:00.114: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2506 /api/v1/namespaces/watch-2506/configmaps/e2e-watch-test-configmap-b daaa8eb5-7637-49b9-8422-4b8492b6e235 3567412 0 2020-05-11 20:39:50 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-11 20:39:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:40:10.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2506" for this suite. • [SLOW TEST:61.047 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":280,"skipped":4670,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:40:10.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6928 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6928 STEP: Creating statefulset with conflicting port in namespace statefulset-6928 STEP: Waiting until pod test-pod will start running in namespace statefulset-6928 STEP: Waiting until 
stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6928 May 11 20:40:24.481: INFO: Observed stateful pod in namespace: statefulset-6928, name: ss-0, uid: 97552ba2-5fc3-4868-9ffb-8f2eced2a1fd, status phase: Pending. Waiting for statefulset controller to delete. May 11 20:40:24.924: INFO: Observed stateful pod in namespace: statefulset-6928, name: ss-0, uid: 97552ba2-5fc3-4868-9ffb-8f2eced2a1fd, status phase: Failed. Waiting for statefulset controller to delete. May 11 20:40:24.932: INFO: Observed stateful pod in namespace: statefulset-6928, name: ss-0, uid: 97552ba2-5fc3-4868-9ffb-8f2eced2a1fd, status phase: Failed. Waiting for statefulset controller to delete. May 11 20:40:24.954: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6928 STEP: Removing pod with conflicting port in namespace statefulset-6928 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6928 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 11 20:40:31.230: INFO: Deleting all statefulset in ns statefulset-6928 May 11 20:40:31.233: INFO: Scaling statefulset ss to 0 May 11 20:40:51.377: INFO: Waiting for statefulset status.replicas updated to 0 May 11 20:40:51.380: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:40:51.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6928" for this suite. • [SLOW TEST:40.960 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":281,"skipped":4673,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:40:51.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-da2280c6-bbd9-491c-9b8a-7874a3f157c7 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:41:00.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "configmap-327" for this suite. • [SLOW TEST:8.456 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4680,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:41:00.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 20:41:04.491: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:41:04.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5114" for this suite. 
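The container-runtime spec above exercises the terminationMessagePath/terminationMessagePolicy contract: the container writes "OK" into its termination-log file before exiting, and the kubelet copies that file into the terminated container status, which is what the `Expected: &{OK}` assertion reads back. Below is a rough client-go sketch of an equivalent pod; the image and command are assumptions, since the log does not record them.

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "main",
                    Image: "busybox:1.29", // assumed image
                    // Write the message into the termination-log file, then exit 0.
                    Command:                  []string{"/bin/sh", "-c", "printf OK > /dev/termination-log"},
                    TerminationMessagePath:   "/dev/termination-log",
                    TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

With FallbackToLogsOnError the file still wins whenever it is non-empty; the tail of the container log is used only when the container fails and the file has no content.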
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":283,"skipped":4694,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:41:04.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-e52dbc1b-7275-46e4-806d-85c82b0a73ed STEP: Creating a pod to test consume secrets May 11 20:41:04.831: INFO: Waiting up to 5m0s for pod "pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b" in namespace "secrets-3265" to be "Succeeded or Failed" May 11 20:41:04.867: INFO: Pod "pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.885034ms May 11 20:41:06.872: INFO: Pod "pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041667734s May 11 20:41:08.984: INFO: Pod "pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153880275s May 11 20:41:11.121: INFO: Pod "pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.290814113s STEP: Saw pod success May 11 20:41:11.121: INFO: Pod "pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b" satisfied condition "Succeeded or Failed" May 11 20:41:11.124: INFO: Trying to get logs from node latest-worker pod pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b container secret-volume-test: STEP: delete the pod May 11 20:41:11.352: INFO: Waiting for pod pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b to disappear May 11 20:41:11.369: INFO: Pod pod-secrets-d9fadadc-8a33-4f24-966e-721d446a874b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:41:11.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3265" for this suite. 
• [SLOW TEST:6.775 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4702,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:41:11.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-rch9 STEP: Creating a pod to test atomic-volume-subpath May 11 20:41:12.290: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rch9" in namespace "subpath-8210" to be "Succeeded or Failed" May 11 20:41:12.434: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Pending", Reason="", readiness=false. Elapsed: 144.734727ms May 11 20:41:14.691: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401776017s May 11 20:41:16.694: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404891906s May 11 20:41:18.699: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 6.409151488s May 11 20:41:20.703: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 8.413163451s May 11 20:41:22.727: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 10.437047956s May 11 20:41:24.729: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 12.439934238s May 11 20:41:26.734: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 14.443969084s May 11 20:41:28.769: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 16.479629445s May 11 20:41:30.774: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 18.484374657s May 11 20:41:32.777: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 20.48781727s May 11 20:41:34.780: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 22.490196027s May 11 20:41:36.879: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Running", Reason="", readiness=true. Elapsed: 24.589891082s May 11 20:41:38.884: INFO: Pod "pod-subpath-test-configmap-rch9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.59427721s STEP: Saw pod success May 11 20:41:38.884: INFO: Pod "pod-subpath-test-configmap-rch9" satisfied condition "Succeeded or Failed" May 11 20:41:38.887: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-rch9 container test-container-subpath-configmap-rch9: STEP: delete the pod May 11 20:41:39.264: INFO: Waiting for pod pod-subpath-test-configmap-rch9 to disappear May 11 20:41:39.310: INFO: Pod pod-subpath-test-configmap-rch9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-rch9 May 11 20:41:39.310: INFO: Deleting pod "pod-subpath-test-configmap-rch9" in namespace "subpath-8210" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:41:39.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8210" for this suite. • [SLOW TEST:27.942 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":285,"skipped":4709,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:41:39.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 11 20:41:39.933: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:41:50.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2562" for this suite. 
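The InitContainer spec just above creates a pod whose spec.initContainers must each run to completion, in order, before the app container starts; on a RestartAlways pod the kubelet then keeps the main container running. Below is a minimal sketch of such a pod with assumed images and commands, since the log only records "PodSpec: initContainers in spec.initContainers".

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                // Both init containers must exit 0, in order, before the app
                // container starts; the kubelet reports their progress in
                // status.initContainerStatuses.
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
                    {Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{{
                    Name:  "run1",
                    Image: "k8s.gcr.io/pause:3.2", // assumed long-running image
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }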
• [SLOW TEST:11.502 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":286,"skipped":4738,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:41:50.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 20:41:50.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1534' May 11 20:41:58.109: INFO: stderr: "" May 11 20:41:58.109: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 11 20:42:03.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1534 -o json' May 11 20:42:03.261: INFO: stderr: "" May 11 20:42:03.261: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-11T20:41:58Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-11T20:41:57Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n 
\"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.163\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-11T20:42:02Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1534\",\n \"resourceVersion\": \"3568136\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1534/pods/e2e-test-httpd-pod\",\n \"uid\": \"cfc3c509-95d5-4fc7-bdaa-26bb1008abd8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-cjgtn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-cjgtn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-cjgtn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T20:41:59Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T20:42:02Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T20:42:02Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T20:41:58Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://27ce60dc3db15d0d8c15a9fd4aec35eb2da384a6a08582460594dfa670a0d291\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-11T20:42:02Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.163\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.163\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-11T20:41:59Z\"\n }\n}\n" STEP: replace the image in the 
pod May 11 20:42:03.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1534' May 11 20:42:03.640: INFO: stderr: "" May 11 20:42:03.640: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 11 20:42:03.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1534' May 11 20:42:07.992: INFO: stderr: "" May 11 20:42:07.993: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:42:07.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1534" for this suite. • [SLOW TEST:17.180 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":287,"skipped":4760,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 11 20:42:08.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 11 20:42:08.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522" in namespace "projected-7026" to be "Succeeded or Failed" May 11 20:42:08.068: INFO: Pod "downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522": Phase="Pending", Reason="", readiness=false. Elapsed: 16.183273ms May 11 20:42:10.071: INFO: Pod "downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019569421s May 11 20:42:12.347: INFO: Pod "downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522": Phase="Running", Reason="", readiness=true. Elapsed: 4.295795393s May 11 20:42:14.468: INFO: Pod "downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.416297679s STEP: Saw pod success May 11 20:42:14.468: INFO: Pod "downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522" satisfied condition "Succeeded or Failed" May 11 20:42:14.472: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522 container client-container: STEP: delete the pod May 11 20:42:14.974: INFO: Waiting for pod downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522 to disappear May 11 20:42:14.996: INFO: Pod downwardapi-volume-2ebfa412-1382-40b6-9485-5b066a357522 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 11 20:42:14.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7026" for this suite. • [SLOW TEST:7.002 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4770,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 11 20:42:15.004: INFO: Running AfterSuite actions on all nodes May 11 20:42:15.004: INFO: Running AfterSuite actions on node 1 May 11 20:42:15.004: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0} Ran 288 of 5095 Specs in 7241.778 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped PASS
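As a closing reference, the ADDED/MODIFIED/DELETED notifications recorded by the Watchers spec earlier in this run can be reproduced with a plain client-go watch on the same label selector. A minimal sketch, assuming the watch-2506 namespace and label values from the log; the output loosely mirrors the suite's "Got : ..." lines.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Watch configmaps carrying label A, as the test's first watcher does.
        w, err := client.CoreV1().ConfigMaps("watch-2506").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=multiple-watchers-A",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Each event.Type is ADDED, MODIFIED, or DELETED, matching the
        // "Got : ADDED/MODIFIED/DELETED" lines in the log above.
        for event := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", event.Type, event.Object)
        }
    }

A production consumer would normally use an informer rather than a raw watch, so that dropped connections are resumed from the last observed resourceVersion.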