I0410 21:06:50.804107 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0410 21:06:50.804492 7 e2e.go:109] Starting e2e run "876ca676-7ff4-4a52-a92f-2d64cfb906bd" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586552809 - Will randomize all specs
Will run 278 of 4842 specs

Apr 10 21:06:50.868: INFO: >>> kubeConfig: /root/.kube/config
Apr 10 21:06:50.873: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 10 21:06:50.902: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 10 21:06:50.934: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 10 21:06:50.934: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 10 21:06:50.934: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 10 21:06:50.949: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 10 21:06:50.949: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 10 21:06:50.949: INFO: e2e test version: v1.17.4
Apr 10 21:06:50.951: INFO: kube-apiserver version: v1.17.2
Apr 10 21:06:50.951: INFO: >>> kubeConfig: /root/.kube/config
Apr 10 21:06:50.956: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:06:50.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Apr 10 21:06:51.061: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 10 21:06:55.604: INFO: Successfully updated pod "pod-update-activedeadlineseconds-13823d22-2d13-4254-baf6-814b197c6b77"
Apr 10 21:06:55.604: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-13823d22-2d13-4254-baf6-814b197c6b77" in namespace "pods-9221" to be "terminated due to deadline exceeded"
Apr 10 21:06:55.610: INFO: Pod "pod-update-activedeadlineseconds-13823d22-2d13-4254-baf6-814b197c6b77": Phase="Running", Reason="", readiness=true. Elapsed: 5.984499ms
Apr 10 21:06:57.614: INFO: Pod "pod-update-activedeadlineseconds-13823d22-2d13-4254-baf6-814b197c6b77": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009651451s
Apr 10 21:06:57.614: INFO: Pod "pod-update-activedeadlineseconds-13823d22-2d13-4254-baf6-814b197c6b77" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:06:57.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9221" for this suite.
• [SLOW TEST:6.666 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:06:57.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:07:02.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5448" for this suite.
• [SLOW TEST:5.222 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":2,"skipped":33,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:07:02.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-3df2bd60-3a5a-4781-a34b-838de815c28c
STEP: Creating secret with name s-test-opt-upd-ff85942f-7184-4056-9c7c-4aa1c6b3caf6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3df2bd60-3a5a-4781-a34b-838de815c28c
STEP: Updating secret s-test-opt-upd-ff85942f-7184-4056-9c7c-4aa1c6b3caf6
STEP: Creating secret with name s-test-opt-create-5118a2ea-23ac-4379-90d9-b6bf0070f0cd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:08:33.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2709" for this suite.

• [SLOW TEST:90.611 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":38,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:08:33.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 10 21:08:33.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7d547d3-53fc-44e2-8942-77967bc8d77e" in namespace "downward-api-5409" to be "success or failure"
Apr 10 21:08:33.549: INFO: Pod "downwardapi-volume-c7d547d3-53fc-44e2-8942-77967bc8d77e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492462ms
Apr 10 21:08:35.553: INFO: Pod "downwardapi-volume-c7d547d3-53fc-44e2-8942-77967bc8d77e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006172052s
Apr 10 21:08:37.557: INFO: Pod "downwardapi-volume-c7d547d3-53fc-44e2-8942-77967bc8d77e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010413189s
STEP: Saw pod success
Apr 10 21:08:37.557: INFO: Pod "downwardapi-volume-c7d547d3-53fc-44e2-8942-77967bc8d77e" satisfied condition "success or failure"
Apr 10 21:08:37.561: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c7d547d3-53fc-44e2-8942-77967bc8d77e container client-container:
STEP: delete the pod
Apr 10 21:08:37.633: INFO: Waiting for pod downwardapi-volume-c7d547d3-53fc-44e2-8942-77967bc8d77e to disappear
Apr 10 21:08:37.645: INFO: Pod downwardapi-volume-c7d547d3-53fc-44e2-8942-77967bc8d77e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:08:37.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5409" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":40,"failed":0}
SS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:08:37.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-b654565d-d77c-41e8-9a1c-f4a60dd68054
STEP: Creating a pod to test consume secrets
Apr 10 21:08:37.731: INFO: Waiting up to 5m0s for pod "pod-secrets-b0d66ccf-bb99-40fb-90e2-2eaebaa443c7" in namespace "secrets-1939" to be "success or failure"
Apr 10 21:08:37.745: INFO: Pod "pod-secrets-b0d66ccf-bb99-40fb-90e2-2eaebaa443c7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.207354ms
Apr 10 21:08:39.749: INFO: Pod "pod-secrets-b0d66ccf-bb99-40fb-90e2-2eaebaa443c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01825636s
Apr 10 21:08:41.752: INFO: Pod "pod-secrets-b0d66ccf-bb99-40fb-90e2-2eaebaa443c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021340317s
STEP: Saw pod success
Apr 10 21:08:41.753: INFO: Pod "pod-secrets-b0d66ccf-bb99-40fb-90e2-2eaebaa443c7" satisfied condition "success or failure"
Apr 10 21:08:41.755: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b0d66ccf-bb99-40fb-90e2-2eaebaa443c7 container secret-volume-test:
STEP: delete the pod
Apr 10 21:08:41.772: INFO: Waiting for pod pod-secrets-b0d66ccf-bb99-40fb-90e2-2eaebaa443c7 to disappear
Apr 10 21:08:41.777: INFO: Pod pod-secrets-b0d66ccf-bb99-40fb-90e2-2eaebaa443c7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:08:41.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1939" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":42,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:08:41.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 10 21:08:42.412: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 10 21:08:44.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149722, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149722, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149722, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149722, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 10 21:08:47.456: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:08:47.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1808" for this suite.
STEP: Destroying namespace "webhook-1808-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.898 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":6,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:08:47.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Apr 10 21:08:53.764: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9842 PodName:pod-sharedvolume-a940c478-4677-4edc-8a77-c6a8ea95c48a ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 10 21:08:53.764: INFO: >>> kubeConfig: /root/.kube/config
I0410 21:08:53.798432 7 log.go:172] (0xc0027fbad0) (0xc002378820) Create stream
I0410 21:08:53.798471 7 log.go:172] (0xc0027fbad0) (0xc002378820) Stream added, broadcasting: 1
I0410 21:08:53.801506 7 log.go:172] (0xc0027fbad0) Reply frame received for 1
I0410 21:08:53.801551 7 log.go:172] (0xc0027fbad0) (0xc002734280) Create stream
I0410 21:08:53.801569 7 log.go:172] (0xc0027fbad0) (0xc002734280) Stream added, broadcasting: 3
I0410 21:08:53.802539 7 log.go:172] (0xc0027fbad0) Reply frame received for 3
I0410 21:08:53.802574 7 log.go:172] (0xc0027fbad0) (0xc0027a8000) Create stream
I0410 21:08:53.802587 7 log.go:172] (0xc0027fbad0) (0xc0027a8000) Stream added, broadcasting: 5
I0410 21:08:53.804041 7 log.go:172] (0xc0027fbad0) Reply frame received for 5
I0410 21:08:53.864246 7 log.go:172] (0xc0027fbad0) Data frame received for 3
I0410 21:08:53.864274 7 log.go:172] (0xc002734280) (3) Data frame handling
I0410 21:08:53.864283 7 log.go:172] (0xc002734280) (3) Data frame sent
I0410 21:08:53.864290 7 log.go:172] (0xc0027fbad0) Data frame received for 3
I0410 21:08:53.864296 7 log.go:172] (0xc002734280) (3) Data frame handling
I0410 21:08:53.864321 7 log.go:172] (0xc0027fbad0) Data frame received for 5
I0410 21:08:53.864334 7 log.go:172] (0xc0027a8000) (5) Data frame handling
I0410 21:08:53.865614 7 log.go:172] (0xc0027fbad0) Data frame received for 1
I0410 21:08:53.865638 7 log.go:172] (0xc002378820) (1) Data frame handling
I0410 21:08:53.865649 7 log.go:172] (0xc002378820) (1) Data frame sent
I0410 21:08:53.865660 7 log.go:172] (0xc0027fbad0) (0xc002378820) Stream removed, broadcasting: 1
I0410 21:08:53.865670 7 log.go:172] (0xc0027fbad0) Go away received
I0410 21:08:53.866129 7 log.go:172] (0xc0027fbad0) (0xc002378820) Stream removed, broadcasting: 1
I0410 21:08:53.866153 7 log.go:172] (0xc0027fbad0) (0xc002734280) Stream removed, broadcasting: 3
I0410 21:08:53.866167 7 log.go:172] (0xc0027fbad0) (0xc0027a8000) Stream removed, broadcasting: 5
Apr 10 21:08:53.866: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:08:53.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9842" for this suite.

• [SLOW TEST:6.191 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":7,"skipped":108,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:08:53.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 10 21:08:53.932: INFO: Waiting up to 5m0s for pod "pod-fb727200-8141-4ab6-a441-c85a80d77d2c" in namespace "emptydir-2557" to be "success or failure"
Apr 10 21:08:53.940: INFO: Pod "pod-fb727200-8141-4ab6-a441-c85a80d77d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.698617ms
Apr 10 21:08:55.944: INFO: Pod "pod-fb727200-8141-4ab6-a441-c85a80d77d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0115362s
Apr 10 21:08:57.947: INFO: Pod "pod-fb727200-8141-4ab6-a441-c85a80d77d2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014446682s
STEP: Saw pod success
Apr 10 21:08:57.947: INFO: Pod "pod-fb727200-8141-4ab6-a441-c85a80d77d2c" satisfied condition "success or failure"
Apr 10 21:08:57.950: INFO: Trying to get logs from node jerma-worker pod pod-fb727200-8141-4ab6-a441-c85a80d77d2c container test-container:
STEP: delete the pod
Apr 10 21:08:58.010: INFO: Waiting for pod pod-fb727200-8141-4ab6-a441-c85a80d77d2c to disappear
Apr 10 21:08:58.014: INFO: Pod pod-fb727200-8141-4ab6-a441-c85a80d77d2c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:08:58.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2557" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":120,"failed":0}
SS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:08:58.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:08:58.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2127" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":9,"skipped":122,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:08:58.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:09:58.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-667" for this suite.
• [SLOW TEST:60.103 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:09:58.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 10 21:09:58.275: INFO: Waiting up to 5m0s for pod "pod-d8c57733-8ce2-4a6c-b77d-6589e62e18d3" in namespace "emptydir-5289" to be "success or failure"
Apr 10 21:09:58.279: INFO: Pod "pod-d8c57733-8ce2-4a6c-b77d-6589e62e18d3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.825785ms
Apr 10 21:10:00.288: INFO: Pod "pod-d8c57733-8ce2-4a6c-b77d-6589e62e18d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013219261s
Apr 10 21:10:02.293: INFO: Pod "pod-d8c57733-8ce2-4a6c-b77d-6589e62e18d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017570153s
STEP: Saw pod success
Apr 10 21:10:02.293: INFO: Pod "pod-d8c57733-8ce2-4a6c-b77d-6589e62e18d3" satisfied condition "success or failure"
Apr 10 21:10:02.297: INFO: Trying to get logs from node jerma-worker pod pod-d8c57733-8ce2-4a6c-b77d-6589e62e18d3 container test-container:
STEP: delete the pod
Apr 10 21:10:02.312: INFO: Waiting for pod pod-d8c57733-8ce2-4a6c-b77d-6589e62e18d3 to disappear
Apr 10 21:10:02.317: INFO: Pod pod-d8c57733-8ce2-4a6c-b77d-6589e62e18d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 10 21:10:02.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5289" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":151,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 10 21:10:02.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 10 21:10:02.397: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 10 21:10:02.417: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 10 21:10:07.420: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 10 21:10:07.420: INFO: Creating deployment "test-rolling-update-deployment"
Apr 10 21:10:07.452: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Apr 10 21:10:07.459: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Apr 10 21:10:09.468: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Apr 10 21:10:09.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149807, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149807, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149807, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149807, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 10 21:10:11.482: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Apr 10 21:10:11.490: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4044 /apis/apps/v1/namespaces/deployment-4044/deployments/test-rolling-update-deployment 48157e4d-5fee-445a-955a-5da715b686b8 7032051 1 2020-04-10 21:10:07 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001efc858 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-10 21:10:07 +0000 UTC,LastTransitionTime:2020-04-10 21:10:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-10 21:10:10 +0000 UTC,LastTransitionTime:2020-04-10 21:10:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Apr 10 21:10:11.493: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-4044 /apis/apps/v1/namespaces/deployment-4044/replicasets/test-rolling-update-deployment-67cf4f6444 2dca39b5-5a6e-470d-b688-28bff794e0b0 7032039 1 2020-04-10 21:10:07 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 48157e4d-5fee-445a-955a-5da715b686b8 0xc001efcd67 0xc001efcd68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001efcdd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Apr 10 21:10:11.493: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Apr 10 21:10:11.493: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4044 /apis/apps/v1/namespaces/deployment-4044/replicasets/test-rolling-update-controller a664625b-d1ec-4e83-a808-fb25edf404f8 7032049 2 2020-04-10 21:10:02 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 48157e4d-5fee-445a-955a-5da715b686b8 0xc001efcc97 0xc001efcc98}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001efccf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 10 21:10:11.496: INFO: Pod "test-rolling-update-deployment-67cf4f6444-tmh9w" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-tmh9w test-rolling-update-deployment-67cf4f6444- deployment-4044
/api/v1/namespaces/deployment-4044/pods/test-rolling-update-deployment-67cf4f6444-tmh9w 6a34d273-2fe0-4a19-9666-73099738afec 7032038 0 2020-04-10 21:10:07 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 2dca39b5-5a6e-470d-b688-28bff794e0b0 0xc001f32307 0xc001f32308}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vb9vq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vb9vq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vb9vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{
},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:10:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:10:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:10:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.116,StartTime:2020-04-10 21:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:10:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://97055368ce355a0cdea6155cb33b482b6cef9facc4561c946f8b7c10c13bc766,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.116,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:10:11.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4044" for this suite. • [SLOW TEST:9.180 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":12,"skipped":162,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:10:11.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace 
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 10 21:10:11.563: INFO: Waiting up to 5m0s for pod "pod-5a00bfb8-b1b4-4590-a6b8-14e01c5fb4e4" in namespace "emptydir-17" to be "success or failure" Apr 10 21:10:11.567: INFO: Pod "pod-5a00bfb8-b1b4-4590-a6b8-14e01c5fb4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.624214ms Apr 10 21:10:13.570: INFO: Pod "pod-5a00bfb8-b1b4-4590-a6b8-14e01c5fb4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006975979s Apr 10 21:10:15.575: INFO: Pod "pod-5a00bfb8-b1b4-4590-a6b8-14e01c5fb4e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011309776s STEP: Saw pod success Apr 10 21:10:15.575: INFO: Pod "pod-5a00bfb8-b1b4-4590-a6b8-14e01c5fb4e4" satisfied condition "success or failure" Apr 10 21:10:15.578: INFO: Trying to get logs from node jerma-worker2 pod pod-5a00bfb8-b1b4-4590-a6b8-14e01c5fb4e4 container test-container: STEP: delete the pod Apr 10 21:10:15.621: INFO: Waiting for pod pod-5a00bfb8-b1b4-4590-a6b8-14e01c5fb4e4 to disappear Apr 10 21:10:15.639: INFO: Pod pod-5a00bfb8-b1b4-4590-a6b8-14e01c5fb4e4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:10:15.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-17" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":171,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:10:15.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:10:15.712: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 10 21:10:18.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3333 create -f -' Apr 10 21:10:20.800: INFO: stderr: "" Apr 10 21:10:20.800: INFO: stdout: "e2e-test-crd-publish-openapi-6347-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 10 21:10:20.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3333 delete e2e-test-crd-publish-openapi-6347-crds test-cr' Apr 10 21:10:20.897: INFO: stderr: "" Apr 10 21:10:20.897: INFO: stdout: "e2e-test-crd-publish-openapi-6347-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 10 21:10:20.897: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3333 apply -f -' Apr 10 21:10:21.195: INFO: stderr: "" Apr 10 21:10:21.195: INFO: stdout: "e2e-test-crd-publish-openapi-6347-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 10 21:10:21.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3333 delete e2e-test-crd-publish-openapi-6347-crds test-cr' Apr 10 21:10:21.304: INFO: stderr: "" Apr 10 21:10:21.304: INFO: stdout: "e2e-test-crd-publish-openapi-6347-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 10 21:10:21.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6347-crds' Apr 10 21:10:21.559: INFO: stderr: "" Apr 10 21:10:21.559: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6347-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:10:24.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3333" for this suite. 
• [SLOW TEST:8.830 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":14,"skipped":171,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:10:24.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7117 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 10 21:10:24.540: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 10 21:10:44.645: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.19 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7117 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Apr 10 21:10:44.645: INFO: >>> kubeConfig: /root/.kube/config I0410 21:10:44.680034 7 log.go:172] (0xc00513e000) (0xc002379a40) Create stream I0410 21:10:44.680062 7 log.go:172] (0xc00513e000) (0xc002379a40) Stream added, broadcasting: 1 I0410 21:10:44.682135 7 log.go:172] (0xc00513e000) Reply frame received for 1 I0410 21:10:44.682203 7 log.go:172] (0xc00513e000) (0xc002379ae0) Create stream I0410 21:10:44.682222 7 log.go:172] (0xc00513e000) (0xc002379ae0) Stream added, broadcasting: 3 I0410 21:10:44.683255 7 log.go:172] (0xc00513e000) Reply frame received for 3 I0410 21:10:44.683297 7 log.go:172] (0xc00513e000) (0xc0027343c0) Create stream I0410 21:10:44.683313 7 log.go:172] (0xc00513e000) (0xc0027343c0) Stream added, broadcasting: 5 I0410 21:10:44.684379 7 log.go:172] (0xc00513e000) Reply frame received for 5 I0410 21:10:45.781625 7 log.go:172] (0xc00513e000) Data frame received for 3 I0410 21:10:45.781667 7 log.go:172] (0xc002379ae0) (3) Data frame handling I0410 21:10:45.781704 7 log.go:172] (0xc002379ae0) (3) Data frame sent I0410 21:10:45.781729 7 log.go:172] (0xc00513e000) Data frame received for 3 I0410 21:10:45.781747 7 log.go:172] (0xc002379ae0) (3) Data frame handling I0410 21:10:45.782062 7 log.go:172] (0xc00513e000) Data frame received for 5 I0410 21:10:45.782104 7 log.go:172] (0xc0027343c0) (5) Data frame handling I0410 21:10:45.784220 7 log.go:172] (0xc00513e000) Data frame received for 1 I0410 21:10:45.784254 7 log.go:172] (0xc002379a40) (1) Data frame handling I0410 21:10:45.784273 7 log.go:172] (0xc002379a40) (1) Data frame sent I0410 21:10:45.784304 7 log.go:172] (0xc00513e000) (0xc002379a40) Stream removed, broadcasting: 1 I0410 21:10:45.784335 7 log.go:172] (0xc00513e000) Go away received I0410 21:10:45.784502 7 log.go:172] (0xc00513e000) (0xc002379a40) Stream removed, broadcasting: 1 I0410 21:10:45.784533 7 log.go:172] (0xc00513e000) (0xc002379ae0) Stream removed, broadcasting: 3 I0410 21:10:45.784546 7 log.go:172] 
(0xc00513e000) (0xc0027343c0) Stream removed, broadcasting: 5 Apr 10 21:10:45.784: INFO: Found all expected endpoints: [netserver-0] Apr 10 21:10:45.788: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.118 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7117 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:10:45.788: INFO: >>> kubeConfig: /root/.kube/config I0410 21:10:45.823459 7 log.go:172] (0xc005150370) (0xc0028886e0) Create stream I0410 21:10:45.823482 7 log.go:172] (0xc005150370) (0xc0028886e0) Stream added, broadcasting: 1 I0410 21:10:45.826235 7 log.go:172] (0xc005150370) Reply frame received for 1 I0410 21:10:45.826299 7 log.go:172] (0xc005150370) (0xc002379b80) Create stream I0410 21:10:45.826329 7 log.go:172] (0xc005150370) (0xc002379b80) Stream added, broadcasting: 3 I0410 21:10:45.827316 7 log.go:172] (0xc005150370) Reply frame received for 3 I0410 21:10:45.827372 7 log.go:172] (0xc005150370) (0xc002734460) Create stream I0410 21:10:45.827389 7 log.go:172] (0xc005150370) (0xc002734460) Stream added, broadcasting: 5 I0410 21:10:45.828310 7 log.go:172] (0xc005150370) Reply frame received for 5 I0410 21:10:46.889630 7 log.go:172] (0xc005150370) Data frame received for 3 I0410 21:10:46.889684 7 log.go:172] (0xc002379b80) (3) Data frame handling I0410 21:10:46.889720 7 log.go:172] (0xc002379b80) (3) Data frame sent I0410 21:10:46.889949 7 log.go:172] (0xc005150370) Data frame received for 3 I0410 21:10:46.890001 7 log.go:172] (0xc002379b80) (3) Data frame handling I0410 21:10:46.890034 7 log.go:172] (0xc005150370) Data frame received for 5 I0410 21:10:46.890052 7 log.go:172] (0xc002734460) (5) Data frame handling I0410 21:10:46.891696 7 log.go:172] (0xc005150370) Data frame received for 1 I0410 21:10:46.891713 7 log.go:172] (0xc0028886e0) (1) Data frame handling I0410 21:10:46.891725 7 log.go:172] (0xc0028886e0) (1) Data frame sent I0410 
21:10:46.891735 7 log.go:172] (0xc005150370) (0xc0028886e0) Stream removed, broadcasting: 1 I0410 21:10:46.891798 7 log.go:172] (0xc005150370) (0xc0028886e0) Stream removed, broadcasting: 1 I0410 21:10:46.891818 7 log.go:172] (0xc005150370) (0xc002379b80) Stream removed, broadcasting: 3 I0410 21:10:46.891992 7 log.go:172] (0xc005150370) (0xc002734460) Stream removed, broadcasting: 5 Apr 10 21:10:46.892: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:10:46.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0410 21:10:46.892368 7 log.go:172] (0xc005150370) Go away received STEP: Destroying namespace "pod-network-test-7117" for this suite. • [SLOW TEST:22.426 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":177,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:10:46.902: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-d9682823-22ba-4b79-99ca-407bfc69c8bf in namespace container-probe-9329 Apr 10 21:10:51.008: INFO: Started pod liveness-d9682823-22ba-4b79-99ca-407bfc69c8bf in namespace container-probe-9329 STEP: checking the pod's current state and verifying that restartCount is present Apr 10 21:10:51.010: INFO: Initial restart count of pod liveness-d9682823-22ba-4b79-99ca-407bfc69c8bf is 0 Apr 10 21:11:07.067: INFO: Restart count of pod container-probe-9329/liveness-d9682823-22ba-4b79-99ca-407bfc69c8bf is now 1 (16.057288477s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:11:07.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9329" for this suite. 
• [SLOW TEST:20.225 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":192,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:11:07.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:11:07.934: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:11:10.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63722149867, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149867, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149868, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722149867, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:11:13.119: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:11:13.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-201" for this suite. STEP: Destroying namespace "webhook-201-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.618 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":17,"skipped":202,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:11:13.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:11:13.799: INFO: Creating deployment "webserver-deployment" Apr 10 21:11:13.814: INFO: Waiting for observed generation 1 Apr 10 21:11:15.938: INFO: Waiting for all required pods to come up Apr 10 21:11:15.942: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 10 21:11:24.070: INFO: Waiting for deployment "webserver-deployment" to complete Apr 10 
21:11:24.077: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 10 21:11:24.084: INFO: Updating deployment webserver-deployment Apr 10 21:11:24.084: INFO: Waiting for observed generation 2 Apr 10 21:11:26.090: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 10 21:11:26.092: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 10 21:11:26.095: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 10 21:11:26.103: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 10 21:11:26.103: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 10 21:11:26.106: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 10 21:11:26.109: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 10 21:11:26.109: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 10 21:11:26.114: INFO: Updating deployment webserver-deployment Apr 10 21:11:26.114: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 10 21:11:26.205: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 10 21:11:26.288: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 10 21:11:26.321: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9581 /apis/apps/v1/namespaces/deployment-9581/deployments/webserver-deployment 69a5d2aa-d4e9-46c5-a282-eb2666a162db 7032715 3 2020-04-10 21:11:13 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fed468 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-10 21:11:24 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-10 21:11:26 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 10 21:11:26.358: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9581 /apis/apps/v1/namespaces/deployment-9581/replicasets/webserver-deployment-c7997dcc8 
001187ce-7297-48c8-ba8d-cafd06ead4df 7032704 3 2020-04-10 21:11:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 69a5d2aa-d4e9-46c5-a282-eb2666a162db 0xc002fedb97 0xc002fedb98}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fedc28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 10 21:11:26.358: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Apr 10 21:11:26.358: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9581 /apis/apps/v1/namespaces/deployment-9581/replicasets/webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 7032703 3 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 
69a5d2aa-d4e9-46c5-a282-eb2666a162db 0xc002feda47 0xc002feda48}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fedb18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Apr 10 21:11:26.476: INFO: Pod "webserver-deployment-595b5b9587-67q7b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-67q7b webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-67q7b 775658b5-50a9-4e5c-92e4-7c535ac57ce8 7032730 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa01e7 0xc002fa01e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.476: INFO: Pod "webserver-deployment-595b5b9587-bd4fz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bd4fz webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-bd4fz ea9f3588-cd34-43f5-81bb-1028d6535682 7032742 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa0437 0xc002fa0438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.476: INFO: Pod "webserver-deployment-595b5b9587-dp5xm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dp5xm webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-dp5xm dd82789d-d62c-4bd4-80ab-09ec75df2307 7032628 0 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa0637 0xc002fa0638}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.124,StartTime:2020-04-10 21:11:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:11:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://15b04df15898aacd81510d30b801ddbd6d81186bd333928d5575d7d4d7b6efb0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.477: INFO: Pod "webserver-deployment-595b5b9587-fhk2z" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fhk2z webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-fhk2z fdc9bc7f-77dd-4a24-9779-c0daa13cb535 7032718 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa0987 0xc002fa0988}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.477: INFO: Pod "webserver-deployment-595b5b9587-hsl7z" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hsl7z webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-hsl7z 48d3b2f0-c408-4db9-9408-e264e831eb77 7032567 0 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa0bb7 0xc002fa0bb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.21,StartTime:2020-04-10 21:11:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:11:18 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7ba969efd22af1ce1d1ea7f3b34d068f7a1fff3ecb6e51c6e0d25d8cff9644f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.478: INFO: Pod "webserver-deployment-595b5b9587-jlkdw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jlkdw webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-jlkdw b61f2f47-a64c-4f1e-9a17-d3c02f2bda4d 7032632 0 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa0df7 0xc002fa0df8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.126,StartTime:2020-04-10 21:11:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:11:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://83bf5d7c604ca1dc6fa1067d4e4532e42a81bc659e301ed3601825575feb7b8d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.478: INFO: Pod "webserver-deployment-595b5b9587-lplrr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lplrr webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-lplrr 4d3942fa-8219-4ba1-a775-6499d8c72aac 7032745 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa1037 0xc002fa1038}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.478: INFO: Pod "webserver-deployment-595b5b9587-mjq7w" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mjq7w webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-mjq7w 89e304aa-1c2e-428b-83ce-ed4bdb77a7a6 7032723 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa1297 0xc002fa1298}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.478: INFO: Pod "webserver-deployment-595b5b9587-nsrwk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nsrwk webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-nsrwk 91e11f8a-6f41-4a57-b3f8-2c00ebe2b558 7032640 0 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa1497 0xc002fa1498}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.25,StartTime:2020-04-10 21:11:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:11:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1cad66f36bd073e1aeab92b53d2ad5cbaf8c60145b447d204d39a18eedc30aba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.479: INFO: Pod "webserver-deployment-595b5b9587-pq8l2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pq8l2 webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-pq8l2 076e3378-777f-4320-9fe1-21cea509f056 7032737 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa1787 0xc002fa1788}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.479: INFO: Pod "webserver-deployment-595b5b9587-q48vm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q48vm webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-q48vm c9d84689-fa1a-4d5c-913c-c9c47b9e3fbd 7032611 0 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa18d7 0xc002fa18d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.24,StartTime:2020-04-10 21:11:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:11:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e086c716687617365553248f88fdfb29335221134851de619f2c8ea585946627,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.479: INFO: Pod "webserver-deployment-595b5b9587-tpf5r" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tpf5r webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-tpf5r 05c7cf46-7d0a-4055-9ff9-3cede2016f34 7032596 0 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa1ae7 0xc002fa1ae8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.23,StartTime:2020-04-10 21:11:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:11:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c04778b6046439ea83686682bd91c4fbe122b5fe95734c264afa298e2ad5f775,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.479: INFO: Pod "webserver-deployment-595b5b9587-twfv6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-twfv6 webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-twfv6 2ef556d2-faa6-484c-84fa-708781d9e3ae 7032743 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa1d07 0xc002fa1d08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.480: INFO: Pod "webserver-deployment-595b5b9587-vbkwd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vbkwd webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-vbkwd 34ff022a-4bbe-4a71-b850-161be62dab08 7032584 0 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002fa1fb7 0xc002fa1fb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.22,StartTime:2020-04-10 21:11:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:11:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7b5890ca3a15055b4696de828f53d0c4a06230235ea755f7d845937fc9891196,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.480: INFO: Pod "webserver-deployment-595b5b9587-vzqkd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vzqkd webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-vzqkd 6130fd55-d356-447c-96f9-5818b3fb51db 7032728 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002f7e247 0xc002f7e248}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 10 21:11:26.480: INFO: Pod "webserver-deployment-595b5b9587-wdlxl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wdlxl webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-wdlxl 8d0c3cfc-3a39-4376-ba64-716622ee5440 7032749 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002f7e457 0xc002f7e458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.480: INFO: Pod "webserver-deployment-595b5b9587-wnb6h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wnb6h webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-wnb6h 4ca240fa-5d9f-4dfb-bc46-80f561c35b52 7032579 0 2020-04-10 21:11:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002f7e637 0xc002f7e638}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.122,StartTime:2020-04-10 21:11:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:11:19 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4d931ba23af93fc28cc3b8082901e9b6a7d7269dceb96a9e2574005878f84ebd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.122,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.480: INFO: Pod "webserver-deployment-595b5b9587-x2mp5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x2mp5 webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-x2mp5 687ad454-c05c-4729-a25a-02d96eca1af5 7032752 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002f7e967 0xc002f7e968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.480: INFO: Pod "webserver-deployment-595b5b9587-xss4q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xss4q webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-xss4q f00b4cb9-9104-49fe-9a32-7d1674d3af43 7032707 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002f7eb47 0xc002f7eb48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.480: INFO: Pod "webserver-deployment-595b5b9587-zdkv7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zdkv7 webserver-deployment-595b5b9587- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-595b5b9587-zdkv7 e11cb1cf-c268-4237-99ce-d18c25494d95 7032739 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3c2ec903-f3fa-4de9-acfb-5f34068aca8c 0xc002f7ecb7 0xc002f7ecb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.480: INFO: Pod "webserver-deployment-c7997dcc8-4k92r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4k92r webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-4k92r 749ecef4-9e5e-4ef0-ae80-04d942e2e7c6 7032751 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7ee27 0xc002f7ee28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.481: INFO: Pod "webserver-deployment-c7997dcc8-7qzkj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7qzkj webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-7qzkj 9eb0e4fb-ab0d-416e-a665-397d754b5feb 7032683 0 2020-04-10 21:11:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7efb7 0xc002f7efb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-10 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.481: INFO: Pod "webserver-deployment-c7997dcc8-88wlg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-88wlg webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-88wlg b5d07603-4cfc-47b7-9d59-408c6ea386a2 7032716 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7f187 0xc002f7f188}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.481: INFO: Pod "webserver-deployment-c7997dcc8-8w22l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8w22l webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-8w22l 3db61c98-8c88-4f19-8513-eedafafd913b 7032740 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7f2e7 0xc002f7f2e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.481: INFO: Pod "webserver-deployment-c7997dcc8-dmcqm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dmcqm webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-dmcqm 957f26a0-a099-4b18-a605-961e14afdb7e 7032665 0 2020-04-10 21:11:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7f4a7 0xc002f7f4a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-10 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.481: INFO: Pod "webserver-deployment-c7997dcc8-gfc28" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gfc28 webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-gfc28 01403630-0dcb-42c9-8783-42ebe5eb2a34 7032753 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7f6e7 0xc002f7f6e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.481: INFO: Pod "webserver-deployment-c7997dcc8-jmctp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jmctp webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-jmctp 9370d24d-4e7c-4fc5-8c2d-4e655909351d 7032750 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7f8a7 0xc002f7f8a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.482: INFO: Pod "webserver-deployment-c7997dcc8-kblth" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kblth webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-kblth 707e0ee7-2112-4869-a2b5-4b666fd7e000 7032693 0 2020-04-10 21:11:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7fa47 0xc002f7fa48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-10 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.482: INFO: Pod "webserver-deployment-c7997dcc8-qj4nk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qj4nk webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-qj4nk 516fec8b-0341-4e88-a98e-67b6c40b17e3 7032748 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7fc57 0xc002f7fc58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.482: INFO: Pod "webserver-deployment-c7997dcc8-rcz5m" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rcz5m webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-rcz5m aba6911e-1eb8-4ded-829a-20ba6d5b91c7 7032694 0 2020-04-10 21:11:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7fdc7 0xc002f7fdc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-10 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.482: INFO: Pod "webserver-deployment-c7997dcc8-v22xp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v22xp webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-v22xp 176a0085-43dc-43dc-8195-08e992eda378 7032747 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f7ffd7 0xc002f7ffd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:def
ault-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.482: INFO: Pod "webserver-deployment-c7997dcc8-x9q9n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x9q9n webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-x9q9n cb12a70a-7b61-48fb-aa3d-eba6f2f325ec 7032676 0 2020-04-10 21:11:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f641c0 0xc002f641c1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-10 21:11:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 10 21:11:26.483: INFO: Pod "webserver-deployment-c7997dcc8-zs4ft" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zs4ft webserver-deployment-c7997dcc8- deployment-9581 /api/v1/namespaces/deployment-9581/pods/webserver-deployment-c7997dcc8-zs4ft 0023a192-ba1d-4740-a512-a49c85d2a41c 7032741 0 2020-04-10 21:11:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 001187ce-7297-48c8-ba8d-cafd06ead4df 0xc002f64457 0xc002f64458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgggf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgggf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:11:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:11:26.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9581" for this suite. 
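The proportional-scaling run that just finished asserts that, when a mid-rollout Deployment is resized, the new replica total is split across its ReplicaSets in proportion to their current sizes. This is not the controller's actual code (that lives in the Go deployment controller); it is a simplified largest-remainder sketch of the distribution rule:

```python
def scale_proportionally(old_counts, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion to
    their current sizes, handing leftover replicas to the sets with the
    largest fractional share (largest-remainder rounding)."""
    old_total = sum(old_counts)
    if old_total == 0:
        # Nothing to be proportional to; put everything on the first set.
        return [new_total] + [0] * (len(old_counts) - 1)
    raw = [c * new_total / old_total for c in old_counts]
    base = [int(r) for r in raw]  # floor of each proportional share
    leftover = new_total - sum(base)
    # Hand the remaining replicas to the largest fractional remainders.
    by_remainder = sorted(range(len(raw)),
                          key=lambda i: raw[i] - base[i], reverse=True)
    for i in by_remainder[:leftover]:
        base[i] += 1
    return base
```

For example, scaling two ReplicaSets of sizes 3 and 1 to a total of 8 yields 6 and 2, preserving the 3:1 ratio. The real controller additionally respects maxSurge/maxUnavailable bounds, which this sketch omits.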
• [SLOW TEST:12.911 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":18,"skipped":209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:11:26.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 10 21:11:27.259: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:27.289: INFO: Number of nodes with available pods: 0 Apr 10 21:11:27.289: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:28.294: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:28.297: INFO: Number of nodes with available pods: 0 Apr 10 21:11:28.297: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:29.580: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:29.583: INFO: Number of nodes with available pods: 0 Apr 10 21:11:29.583: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:30.309: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:30.325: INFO: Number of nodes with available pods: 0 Apr 10 21:11:30.325: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:31.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:31.361: INFO: Number of nodes with available pods: 0 Apr 10 21:11:31.361: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:32.324: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:32.572: INFO: Number of nodes with available pods: 0 Apr 10 21:11:32.572: INFO: Node 
jerma-worker is running more than one daemon pod Apr 10 21:11:33.419: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:33.581: INFO: Number of nodes with available pods: 0 Apr 10 21:11:33.581: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:34.324: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:34.381: INFO: Number of nodes with available pods: 0 Apr 10 21:11:34.381: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:35.443: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:36.071: INFO: Number of nodes with available pods: 0 Apr 10 21:11:36.072: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:36.618: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:36.928: INFO: Number of nodes with available pods: 0 Apr 10 21:11:36.928: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:37.454: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:37.685: INFO: Number of nodes with available pods: 0 Apr 10 21:11:37.685: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:38.401: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:38.404: INFO: Number of nodes with 
available pods: 0 Apr 10 21:11:38.404: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:39.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:39.597: INFO: Number of nodes with available pods: 0 Apr 10 21:11:39.597: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:40.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:40.642: INFO: Number of nodes with available pods: 0 Apr 10 21:11:40.642: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:41.323: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:41.331: INFO: Number of nodes with available pods: 0 Apr 10 21:11:41.331: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:11:42.857: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:42.926: INFO: Number of nodes with available pods: 2 Apr 10 21:11:42.926: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 10 21:11:43.418: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:43.612: INFO: Number of nodes with available pods: 1 Apr 10 21:11:43.612: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:44.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:44.620: INFO: Number of nodes with available pods: 1 Apr 10 21:11:44.620: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:45.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:45.618: INFO: Number of nodes with available pods: 1 Apr 10 21:11:45.618: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:46.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:46.619: INFO: Number of nodes with available pods: 1 Apr 10 21:11:46.619: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:47.754: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:47.758: INFO: Number of nodes with available pods: 1 Apr 10 21:11:47.758: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:48.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:48.620: INFO: Number of nodes with available pods: 1 Apr 10 21:11:48.620: INFO: 
Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:49.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:49.678: INFO: Number of nodes with available pods: 1 Apr 10 21:11:49.678: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:50.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:50.621: INFO: Number of nodes with available pods: 1 Apr 10 21:11:50.621: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:51.632: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:51.649: INFO: Number of nodes with available pods: 1 Apr 10 21:11:51.649: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:52.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:52.620: INFO: Number of nodes with available pods: 1 Apr 10 21:11:52.620: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:53.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:53.620: INFO: Number of nodes with available pods: 1 Apr 10 21:11:53.620: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:54.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:54.621: INFO: Number of 
nodes with available pods: 1 Apr 10 21:11:54.621: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:11:55.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:11:55.620: INFO: Number of nodes with available pods: 2 Apr 10 21:11:55.621: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9741, will wait for the garbage collector to delete the pods Apr 10 21:11:55.683: INFO: Deleting DaemonSet.extensions daemon-set took: 7.010552ms Apr 10 21:11:55.784: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.246803ms Apr 10 21:12:09.587: INFO: Number of nodes with available pods: 0 Apr 10 21:12:09.587: INFO: Number of running nodes: 0, number of available pods: 0 Apr 10 21:12:09.593: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9741/daemonsets","resourceVersion":"7033180"},"items":null} Apr 10 21:12:09.596: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9741/pods","resourceVersion":"7033180"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:12:09.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9741" for this suite. 
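The repeated "DaemonSet pods can't tolerate node jerma-control-plane with taints […] skip checking this node" lines come from the framework checking the pod's tolerations against each node's taints. A simplified sketch of that key/operator/effect matching (the authoritative logic is Toleration.ToleratesTaint in k8s.io/api; this is an approximation for illustration):

```python
def tolerates(taint, tolerations):
    """Return True if any toleration in the list matches the taint."""
    for t in tolerations:
        # An empty effect tolerates every effect; otherwise it must match.
        if t.get("effect") not in ("", None, taint["effect"]):
            continue
        if t.get("operator", "Equal") == "Exists":
            # Exists with an empty key tolerates every taint key.
            if t.get("key") in ("", None, taint["key"]):
                return True
        elif (t.get("key") == taint["key"]
              and t.get("value", "") == taint.get("value", "")):
            return True
    return False

# The taint and tolerations below are taken from the log output above.
master_taint = {"key": "node-role.kubernetes.io/master",
                "value": "", "effect": "NoSchedule"}
default_tolerations = [
    {"key": "node.kubernetes.io/not-ready",
     "operator": "Exists", "effect": "NoExecute"},
    {"key": "node.kubernetes.io/unreachable",
     "operator": "Exists", "effect": "NoExecute"},
]
```

With only the default NoExecute tolerations, the NoSchedule master taint is not tolerated, which is exactly why the control-plane node is skipped in every poll above.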
• [SLOW TEST:42.955 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":19,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:12:09.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5204, will wait for the garbage collector to delete the pods Apr 10 21:12:15.806: INFO: Deleting Job.batch foo took: 5.791013ms Apr 10 21:12:16.207: INFO: Terminating Job.batch foo pods took: 400.275788ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:12:59.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5204" for this suite. 
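The Job deletion above notes it "will wait for the garbage collector to delete the pods": deleting the owner leaves dependents for the GC, which removes anything whose ownerReference chain leads back to the deleted object. A toy model of that transitive cleanup (object names and the single-owner shape are simplifications; real objects can have several ownerReferences):

```python
def cascade_delete(objects, target):
    """Given a mapping of object name -> owner name (None for roots),
    delete target plus everything transitively owned by it, returning
    the surviving objects -- a toy model of cascading GC deletion."""
    doomed = {target}
    changed = True
    while changed:  # iterate until no new dependents are discovered
        changed = False
        for name, owner in objects.items():
            if owner in doomed and name not in doomed:
                doomed.add(name)
                changed = True
    return {n: o for n, o in objects.items() if n not in doomed}
```

Deleting a Job that owns two pods leaves only unrelated objects behind, mirroring the "Ensuring job was deleted" step.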
• [SLOW TEST:49.903 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":20,"skipped":255,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:12:59.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 10 21:13:04.146: INFO: Successfully updated pod "annotationupdatef721609e-3cfa-4cd1-a196-9e766a4c9115" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:06.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6727" for this suite. 
• [SLOW TEST:6.658 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":257,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:06.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 10 21:13:06.244: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 10 21:13:06.252: INFO: Waiting for terminating namespaces to be deleted... 
Apr 10 21:13:06.255: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 10 21:13:06.260: INFO: annotationupdatef721609e-3cfa-4cd1-a196-9e766a4c9115 from downward-api-6727 started at 2020-04-10 21:12:59 +0000 UTC (1 container statuses recorded) Apr 10 21:13:06.260: INFO: Container client-container ready: true, restart count 0 Apr 10 21:13:06.260: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:06.260: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 21:13:06.260: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:06.260: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 21:13:06.260: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 10 21:13:06.273: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 10 21:13:06.273: INFO: Container kube-bench ready: false, restart count 0 Apr 10 21:13:06.273: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:06.273: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 21:13:06.273: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:06.273: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 21:13:06.273: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:06.273: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160491b78e2215d2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:07.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-534" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":22,"skipped":270,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:07.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:13:07.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8914882e-7265-4c43-bc6e-a0e243a75f6e" in namespace "downward-api-1312" to be "success or failure" 
Apr 10 21:13:07.425: INFO: Pod "downwardapi-volume-8914882e-7265-4c43-bc6e-a0e243a75f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.141006ms Apr 10 21:13:09.436: INFO: Pod "downwardapi-volume-8914882e-7265-4c43-bc6e-a0e243a75f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020598713s Apr 10 21:13:11.450: INFO: Pod "downwardapi-volume-8914882e-7265-4c43-bc6e-a0e243a75f6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034964891s STEP: Saw pod success Apr 10 21:13:11.450: INFO: Pod "downwardapi-volume-8914882e-7265-4c43-bc6e-a0e243a75f6e" satisfied condition "success or failure" Apr 10 21:13:11.453: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8914882e-7265-4c43-bc6e-a0e243a75f6e container client-container: STEP: delete the pod Apr 10 21:13:11.484: INFO: Waiting for pod downwardapi-volume-8914882e-7265-4c43-bc6e-a0e243a75f6e to disappear Apr 10 21:13:11.489: INFO: Pod downwardapi-volume-8914882e-7265-4c43-bc6e-a0e243a75f6e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:11.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1312" for this suite. 
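The "Waiting up to 5m0s for pod … to be 'success or failure'" lines above are a poll-with-deadline loop: check the pod phase, sleep, repeat until the condition holds or the timeout expires. A minimal generic sketch of that pattern (the real framework helper also inspects pod phases and logs each attempt, which this omits):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll condition() every `interval` seconds until it returns a
    truthy value or `timeout` seconds elapse; returns the last result."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            return result
        time.sleep(interval)
```

A caller would pass something like `lambda: get_pod_phase(name) in ("Succeeded", "Failed")` as the condition; `get_pod_phase` here is hypothetical, standing in for an API read.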
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:11.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 10 21:13:11.822: INFO: Waiting up to 5m0s for pod "pod-8f3bd45a-808d-414d-990b-5a427eb9839f" in namespace "emptydir-3526" to be "success or failure" Apr 10 21:13:11.836: INFO: Pod "pod-8f3bd45a-808d-414d-990b-5a427eb9839f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.047044ms Apr 10 21:13:13.839: INFO: Pod "pod-8f3bd45a-808d-414d-990b-5a427eb9839f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017534354s Apr 10 21:13:15.844: INFO: Pod "pod-8f3bd45a-808d-414d-990b-5a427eb9839f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022191011s STEP: Saw pod success Apr 10 21:13:15.844: INFO: Pod "pod-8f3bd45a-808d-414d-990b-5a427eb9839f" satisfied condition "success or failure" Apr 10 21:13:15.847: INFO: Trying to get logs from node jerma-worker2 pod pod-8f3bd45a-808d-414d-990b-5a427eb9839f container test-container: STEP: delete the pod Apr 10 21:13:15.900: INFO: Waiting for pod pod-8f3bd45a-808d-414d-990b-5a427eb9839f to disappear Apr 10 21:13:15.908: INFO: Pod pod-8f3bd45a-808d-414d-990b-5a427eb9839f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:15.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3526" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:15.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6327.svc.cluster.local)" && echo OK > 
/results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6327.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6327.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6327.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6327.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6327.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 21:13:22.080: INFO: DNS probes using dns-6327/dns-test-b223063f-a710-4407-b335-61a11e054473 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:22.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6327" for this suite. 
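The awk one-liner in the probe script builds the pod's DNS A-record name out of `hostname -i`: the dots of the pod IP become dashes, followed by the namespace, `pod`, and the cluster domain. The same construction as a small helper (the IP value in the example is illustrative, not from this run):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build a pod's DNS A-record name: dashes replace the dots of its
    IP, followed by <namespace>.pod.<cluster-domain>."""
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"),
                             namespace, cluster_domain)
```

For example, a pod at 10.244.1.5 in namespace dns-6327 resolves as `10-244-1-5.dns-6327.pod.cluster.local`, which is the name the test then queries over both UDP (`+notcp`) and TCP (`+tcp`).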
• [SLOW TEST:6.197 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":25,"skipped":378,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:22.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:33.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3018" for this suite. 
• [SLOW TEST:11.169 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":26,"skipped":388,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:33.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 10 21:13:33.361: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 10 21:13:33.392: INFO: Waiting for terminating namespaces to be deleted... 
Apr 10 21:13:33.394: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 10 21:13:33.400: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:33.400: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 21:13:33.400: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:33.400: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 21:13:33.400: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 10 21:13:33.405: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:33.405: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 21:13:33.405: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 10 21:13:33.405: INFO: Container kube-bench ready: false, restart count 0 Apr 10 21:13:33.405: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:33.405: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 21:13:33.405: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 10 21:13:33.405: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 10 21:13:33.497: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 10 21:13:33.497: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Apr 10 21:13:33.497: INFO: Pod kube-proxy-44mlz requesting resource 
cpu=0m on Node jerma-worker Apr 10 21:13:33.497: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Apr 10 21:13:33.497: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 10 21:13:33.503: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-481136fe-bfb7-455a-b821-b67f5a09936f.160491bde4d9c380], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4005/filler-pod-481136fe-bfb7-455a-b821-b67f5a09936f to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-481136fe-bfb7-455a-b821-b67f5a09936f.160491be5e1a981b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-481136fe-bfb7-455a-b821-b67f5a09936f.160491be8673a1a4], Reason = [Created], Message = [Created container filler-pod-481136fe-bfb7-455a-b821-b67f5a09936f] STEP: Considering event: Type = [Normal], Name = [filler-pod-481136fe-bfb7-455a-b821-b67f5a09936f.160491be9632cec6], Reason = [Started], Message = [Started container filler-pod-481136fe-bfb7-455a-b821-b67f5a09936f] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c446c6b-4eb5-4696-af19-50c91a573043.160491bde4814cb1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4005/filler-pod-9c446c6b-4eb5-4696-af19-50c91a573043 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c446c6b-4eb5-4696-af19-50c91a573043.160491be2f236777], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c446c6b-4eb5-4696-af19-50c91a573043.160491be61fed286], Reason = [Created], Message = [Created container 
filler-pod-9c446c6b-4eb5-4696-af19-50c91a573043] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c446c6b-4eb5-4696-af19-50c91a573043.160491be780d9e1d], Reason = [Started], Message = [Started container filler-pod-9c446c6b-4eb5-4696-af19-50c91a573043] STEP: Considering event: Type = [Warning], Name = [additional-pod.160491bed45a2fb8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:38.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4005" for this suite. 
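Editor's note: the filler-pod sizing in the test above follows a simple rule: request all CPU still schedulable on each node, so any additional pod fails with `Insufficient cpu`. A sketch of that arithmetic (the 11230m allocatable figure is an assumption inferred from the logged 100m kindnet request plus the 11130m filler request; allocatable itself is not shown in the log):

```shell
#!/bin/sh
# Size a "filler" pod to consume all remaining allocatable CPU on a
# node, as the scheduler-predicates test does above.
filler_cpu_m() {
  allocatable_m="$1"   # node allocatable CPU, in millicores
  requested_m="$2"     # CPU already requested by existing pods
  echo $((allocatable_m - requested_m))
}

# jerma-worker per the log: kindnet requests 100m, kube-proxy 0m.
# NOTE: 11230 is an assumed allocatable consistent with the logged
# numbers (100m existing + 11130m filler); prints 11130.
filler_cpu_m 11230 100
```

With both nodes saturated this way, the only remaining node carries the master taint, which is why the final event reads "0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu."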
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.355 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":27,"skipped":394,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:38.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-c0f2c0d4-5eba-4099-b9c4-ad021f64c3ee STEP: Creating a pod to test consume secrets Apr 10 21:13:38.775: INFO: Waiting up to 5m0s for pod "pod-secrets-3bbfc5d6-39d8-43dc-943a-dfbd4f95904b" in namespace "secrets-1945" to be "success or failure" Apr 10 21:13:38.784: INFO: Pod "pod-secrets-3bbfc5d6-39d8-43dc-943a-dfbd4f95904b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.767569ms Apr 10 21:13:40.788: INFO: Pod "pod-secrets-3bbfc5d6-39d8-43dc-943a-dfbd4f95904b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012613392s Apr 10 21:13:42.793: INFO: Pod "pod-secrets-3bbfc5d6-39d8-43dc-943a-dfbd4f95904b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017063358s STEP: Saw pod success Apr 10 21:13:42.793: INFO: Pod "pod-secrets-3bbfc5d6-39d8-43dc-943a-dfbd4f95904b" satisfied condition "success or failure" Apr 10 21:13:42.796: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-3bbfc5d6-39d8-43dc-943a-dfbd4f95904b container secret-env-test: STEP: delete the pod Apr 10 21:13:42.829: INFO: Waiting for pod pod-secrets-3bbfc5d6-39d8-43dc-943a-dfbd4f95904b to disappear Apr 10 21:13:42.843: INFO: Pod pod-secrets-3bbfc5d6-39d8-43dc-943a-dfbd4f95904b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1945" for this suite. 
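Editor's note: the "Waiting up to 5m0s ... to be \"success or failure\"" lines that recur throughout this log reflect the framework polling the pod phase every ~2s until it reaches Succeeded or Failed (visible in the elapsed timestamps). A hedged kubectl-based sketch of that loop; the pod/namespace names are placeholders, and this is an approximation of the framework's behavior, not its actual Go implementation:

```shell
#!/bin/sh
# Poll a pod's phase until Succeeded or Failed (the framework's
# "success or failure" condition), with a timeout in seconds.
wait_success_or_failure() {
  pod="$1"; ns="$2"; timeout_s="${3:-300}"   # 5m0s default, as in the log
  elapsed=0
  while [ "$elapsed" -lt "$timeout_s" ]; do
    phase=$(kubectl get pod "$pod" -n "$ns" -o jsonpath='{.status.phase}')
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    sleep 2; elapsed=$((elapsed + 2))       # ~2s interval, per log timestamps
  done
  echo "timeout"; return 1
}
```

Usage would look like `wait_success_or_failure pod-secrets-... secrets-1945`, matching the waits logged above.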
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":408,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:42.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-35ef93c9-bc73-45f9-a9ce-9c0a525b0c1b STEP: Creating a pod to test consume secrets Apr 10 21:13:42.939: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8562719c-38cf-4506-8b90-104cfa4f430a" in namespace "projected-4159" to be "success or failure" Apr 10 21:13:42.945: INFO: Pod "pod-projected-secrets-8562719c-38cf-4506-8b90-104cfa4f430a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.873887ms Apr 10 21:13:45.078: INFO: Pod "pod-projected-secrets-8562719c-38cf-4506-8b90-104cfa4f430a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139119537s Apr 10 21:13:47.083: INFO: Pod "pod-projected-secrets-8562719c-38cf-4506-8b90-104cfa4f430a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.143195139s STEP: Saw pod success Apr 10 21:13:47.083: INFO: Pod "pod-projected-secrets-8562719c-38cf-4506-8b90-104cfa4f430a" satisfied condition "success or failure" Apr 10 21:13:47.085: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8562719c-38cf-4506-8b90-104cfa4f430a container projected-secret-volume-test: STEP: delete the pod Apr 10 21:13:47.126: INFO: Waiting for pod pod-projected-secrets-8562719c-38cf-4506-8b90-104cfa4f430a to disappear Apr 10 21:13:47.138: INFO: Pod pod-projected-secrets-8562719c-38cf-4506-8b90-104cfa4f430a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:13:47.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4159" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:13:47.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 10 21:13:47.202: INFO: PodSpec: initContainers in spec.initContainers Apr 10 21:14:35.157: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-13ffb14c-a0de-4cba-b929-dc5ecc092246", GenerateName:"", Namespace:"init-container-9218", SelfLink:"/api/v1/namespaces/init-container-9218/pods/pod-init-13ffb14c-a0de-4cba-b929-dc5ecc092246", UID:"b487107b-0830-42f0-af4e-79cfef818df8", ResourceVersion:"7033962", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722150027, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"202943378"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6rknd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002091cc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6rknd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6rknd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6rknd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0031c5fd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023df080), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003128060)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003128080)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003128088), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00312808c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150027, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150027, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150027, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150027, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.44", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.44"}}, StartTime:(*v1.Time)(0xc002076720), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00236ac40)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00236acb0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://26f6be12a0b7e20ec5953508d5cb7018605de22492d2fc83b4d995a55c346693", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002076760), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002076740), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, 
Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00312810f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:14:35.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9218" for this suite. • [SLOW TEST:48.045 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":30,"skipped":445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:14:35.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 10 21:14:35.254: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:14:42.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5534" for this suite. • [SLOW TEST:7.017 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":31,"skipped":508,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:14:42.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-5f263a90-cb15-4258-aa6c-1e5848ea6318 STEP: Creating a pod to test consume configMaps Apr 10 21:14:42.452: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-2b809ce0-7759-449a-8b3f-6e6c8924b482" in namespace "projected-9170" to be "success or failure" Apr 10 21:14:42.468: INFO: Pod "pod-projected-configmaps-2b809ce0-7759-449a-8b3f-6e6c8924b482": Phase="Pending", Reason="", readiness=false. Elapsed: 15.317103ms Apr 10 21:14:44.472: INFO: Pod "pod-projected-configmaps-2b809ce0-7759-449a-8b3f-6e6c8924b482": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019226767s Apr 10 21:14:46.475: INFO: Pod "pod-projected-configmaps-2b809ce0-7759-449a-8b3f-6e6c8924b482": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023214754s STEP: Saw pod success Apr 10 21:14:46.476: INFO: Pod "pod-projected-configmaps-2b809ce0-7759-449a-8b3f-6e6c8924b482" satisfied condition "success or failure" Apr 10 21:14:46.479: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-2b809ce0-7759-449a-8b3f-6e6c8924b482 container projected-configmap-volume-test: STEP: delete the pod Apr 10 21:14:46.499: INFO: Waiting for pod pod-projected-configmaps-2b809ce0-7759-449a-8b3f-6e6c8924b482 to disappear Apr 10 21:14:46.504: INFO: Pod pod-projected-configmaps-2b809ce0-7759-449a-8b3f-6e6c8924b482 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:14:46.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9170" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:14:46.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4000 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4000 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4000 Apr 10 21:14:46.630: INFO: Found 0 stateful pods, waiting for 1 Apr 10 21:14:56.635: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 10 21:14:56.639: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4000 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:14:56.901: INFO: stderr: "I0410 21:14:56.782300 152 log.go:172] (0xc0003c2e70) (0xc000645f40) Create stream\nI0410 21:14:56.782351 152 log.go:172] (0xc0003c2e70) (0xc000645f40) Stream added, broadcasting: 1\nI0410 21:14:56.784736 152 log.go:172] (0xc0003c2e70) Reply frame received for 1\nI0410 21:14:56.784778 152 log.go:172] (0xc0003c2e70) (0xc0005b6820) Create stream\nI0410 21:14:56.784800 152 log.go:172] (0xc0003c2e70) (0xc0005b6820) Stream added, broadcasting: 3\nI0410 21:14:56.786059 152 log.go:172] (0xc0003c2e70) Reply frame received for 3\nI0410 21:14:56.786117 152 log.go:172] (0xc0003c2e70) (0xc0007875e0) Create stream\nI0410 21:14:56.786136 152 log.go:172] (0xc0003c2e70) (0xc0007875e0) Stream added, broadcasting: 5\nI0410 21:14:56.787153 152 log.go:172] (0xc0003c2e70) Reply frame received for 5\nI0410 21:14:56.863132 152 log.go:172] (0xc0003c2e70) Data frame received for 5\nI0410 21:14:56.863153 152 log.go:172] (0xc0007875e0) (5) Data frame handling\nI0410 21:14:56.863165 152 log.go:172] (0xc0007875e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:14:56.892262 152 log.go:172] (0xc0003c2e70) Data frame received for 5\nI0410 21:14:56.892296 152 log.go:172] (0xc0007875e0) (5) Data frame handling\nI0410 21:14:56.892351 152 log.go:172] (0xc0003c2e70) Data frame received for 3\nI0410 21:14:56.892376 152 log.go:172] (0xc0005b6820) (3) Data frame handling\nI0410 21:14:56.892408 152 log.go:172] (0xc0005b6820) (3) Data frame sent\nI0410 21:14:56.892430 152 log.go:172] (0xc0003c2e70) Data frame received for 3\nI0410 21:14:56.892457 152 log.go:172] (0xc0005b6820) (3) Data frame handling\nI0410 21:14:56.894754 152 log.go:172] (0xc0003c2e70) Data frame received for 1\nI0410 21:14:56.894788 152 log.go:172] (0xc000645f40) (1) Data frame handling\nI0410 
21:14:56.894821 152 log.go:172] (0xc000645f40) (1) Data frame sent\nI0410 21:14:56.894848 152 log.go:172] (0xc0003c2e70) (0xc000645f40) Stream removed, broadcasting: 1\nI0410 21:14:56.894867 152 log.go:172] (0xc0003c2e70) Go away received\nI0410 21:14:56.895355 152 log.go:172] (0xc0003c2e70) (0xc000645f40) Stream removed, broadcasting: 1\nI0410 21:14:56.895382 152 log.go:172] (0xc0003c2e70) (0xc0005b6820) Stream removed, broadcasting: 3\nI0410 21:14:56.895395 152 log.go:172] (0xc0003c2e70) (0xc0007875e0) Stream removed, broadcasting: 5\n" Apr 10 21:14:56.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:14:56.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:14:56.906: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 10 21:15:06.911: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 10 21:15:06.911: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:15:06.932: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999609s Apr 10 21:15:07.936: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988439921s Apr 10 21:15:08.940: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984444672s Apr 10 21:15:09.944: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.980188852s Apr 10 21:15:10.949: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.975681649s Apr 10 21:15:11.954: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.970698555s Apr 10 21:15:12.959: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.965672762s Apr 10 21:15:13.963: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.961248133s Apr 10 21:15:14.968: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.957070859s Apr 10 
21:15:15.972: INFO: Verifying statefulset ss doesn't scale past 1 for another 952.427632ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4000 Apr 10 21:15:16.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4000 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:15:17.214: INFO: stderr: "I0410 21:15:17.115630 175 log.go:172] (0xc00023ca50) (0xc0008e4000) Create stream\nI0410 21:15:17.115745 175 log.go:172] (0xc00023ca50) (0xc0008e4000) Stream added, broadcasting: 1\nI0410 21:15:17.118306 175 log.go:172] (0xc00023ca50) Reply frame received for 1\nI0410 21:15:17.118346 175 log.go:172] (0xc00023ca50) (0xc0006f3ae0) Create stream\nI0410 21:15:17.118361 175 log.go:172] (0xc00023ca50) (0xc0006f3ae0) Stream added, broadcasting: 3\nI0410 21:15:17.121528 175 log.go:172] (0xc00023ca50) Reply frame received for 3\nI0410 21:15:17.121554 175 log.go:172] (0xc00023ca50) (0xc0006f3cc0) Create stream\nI0410 21:15:17.121566 175 log.go:172] (0xc00023ca50) (0xc0006f3cc0) Stream added, broadcasting: 5\nI0410 21:15:17.122634 175 log.go:172] (0xc00023ca50) Reply frame received for 5\nI0410 21:15:17.206137 175 log.go:172] (0xc00023ca50) Data frame received for 5\nI0410 21:15:17.206176 175 log.go:172] (0xc0006f3cc0) (5) Data frame handling\nI0410 21:15:17.206206 175 log.go:172] (0xc0006f3cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0410 21:15:17.206674 175 log.go:172] (0xc00023ca50) Data frame received for 3\nI0410 21:15:17.206707 175 log.go:172] (0xc0006f3ae0) (3) Data frame handling\nI0410 21:15:17.206717 175 log.go:172] (0xc0006f3ae0) (3) Data frame sent\nI0410 21:15:17.207073 175 log.go:172] (0xc00023ca50) Data frame received for 5\nI0410 21:15:17.207088 175 log.go:172] (0xc0006f3cc0) (5) Data frame handling\nI0410 21:15:17.207705 175 log.go:172] (0xc00023ca50) Data frame received for 
3\nI0410 21:15:17.207732 175 log.go:172] (0xc0006f3ae0) (3) Data frame handling\nI0410 21:15:17.209824 175 log.go:172] (0xc00023ca50) Data frame received for 1\nI0410 21:15:17.209843 175 log.go:172] (0xc0008e4000) (1) Data frame handling\nI0410 21:15:17.209866 175 log.go:172] (0xc0008e4000) (1) Data frame sent\nI0410 21:15:17.209895 175 log.go:172] (0xc00023ca50) (0xc0008e4000) Stream removed, broadcasting: 1\nI0410 21:15:17.209925 175 log.go:172] (0xc00023ca50) Go away received\nI0410 21:15:17.210420 175 log.go:172] (0xc00023ca50) (0xc0008e4000) Stream removed, broadcasting: 1\nI0410 21:15:17.210450 175 log.go:172] (0xc00023ca50) (0xc0006f3ae0) Stream removed, broadcasting: 3\nI0410 21:15:17.210468 175 log.go:172] (0xc00023ca50) (0xc0006f3cc0) Stream removed, broadcasting: 5\n" Apr 10 21:15:17.214: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:15:17.214: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 10 21:15:17.217: INFO: Found 1 stateful pods, waiting for 3 Apr 10 21:15:27.222: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:15:27.222: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:15:27.222: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 10 21:15:27.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4000 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:15:27.459: INFO: stderr: "I0410 21:15:27.351554 196 log.go:172] (0xc000116370) (0xc000a36000) Create stream\nI0410 21:15:27.351620 196 log.go:172] (0xc000116370) (0xc000a36000) Stream added, broadcasting: 1\nI0410 
21:15:27.354642 196 log.go:172] (0xc000116370) Reply frame received for 1\nI0410 21:15:27.354687 196 log.go:172] (0xc000116370) (0xc00073dae0) Create stream\nI0410 21:15:27.354717 196 log.go:172] (0xc000116370) (0xc00073dae0) Stream added, broadcasting: 3\nI0410 21:15:27.355709 196 log.go:172] (0xc000116370) Reply frame received for 3\nI0410 21:15:27.355765 196 log.go:172] (0xc000116370) (0xc0002c4000) Create stream\nI0410 21:15:27.355784 196 log.go:172] (0xc000116370) (0xc0002c4000) Stream added, broadcasting: 5\nI0410 21:15:27.356667 196 log.go:172] (0xc000116370) Reply frame received for 5\nI0410 21:15:27.453438 196 log.go:172] (0xc000116370) Data frame received for 5\nI0410 21:15:27.453477 196 log.go:172] (0xc0002c4000) (5) Data frame handling\nI0410 21:15:27.453514 196 log.go:172] (0xc000116370) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:15:27.453545 196 log.go:172] (0xc00073dae0) (3) Data frame handling\nI0410 21:15:27.453562 196 log.go:172] (0xc00073dae0) (3) Data frame sent\nI0410 21:15:27.453576 196 log.go:172] (0xc000116370) Data frame received for 3\nI0410 21:15:27.453590 196 log.go:172] (0xc00073dae0) (3) Data frame handling\nI0410 21:15:27.453608 196 log.go:172] (0xc0002c4000) (5) Data frame sent\nI0410 21:15:27.453624 196 log.go:172] (0xc000116370) Data frame received for 5\nI0410 21:15:27.453636 196 log.go:172] (0xc0002c4000) (5) Data frame handling\nI0410 21:15:27.454847 196 log.go:172] (0xc000116370) Data frame received for 1\nI0410 21:15:27.454959 196 log.go:172] (0xc000a36000) (1) Data frame handling\nI0410 21:15:27.455009 196 log.go:172] (0xc000a36000) (1) Data frame sent\nI0410 21:15:27.455031 196 log.go:172] (0xc000116370) (0xc000a36000) Stream removed, broadcasting: 1\nI0410 21:15:27.455060 196 log.go:172] (0xc000116370) Go away received\nI0410 21:15:27.455498 196 log.go:172] (0xc000116370) (0xc000a36000) Stream removed, broadcasting: 1\nI0410 21:15:27.455526 196 log.go:172] (0xc000116370) 
(0xc00073dae0) Stream removed, broadcasting: 3\nI0410 21:15:27.455545 196 log.go:172] (0xc000116370) (0xc0002c4000) Stream removed, broadcasting: 5\n" Apr 10 21:15:27.459: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:15:27.459: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:15:27.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4000 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:15:27.708: INFO: stderr: "I0410 21:15:27.602654 217 log.go:172] (0xc0009db290) (0xc00099e500) Create stream\nI0410 21:15:27.602727 217 log.go:172] (0xc0009db290) (0xc00099e500) Stream added, broadcasting: 1\nI0410 21:15:27.608713 217 log.go:172] (0xc0009db290) Reply frame received for 1\nI0410 21:15:27.608745 217 log.go:172] (0xc0009db290) (0xc0006ac780) Create stream\nI0410 21:15:27.608764 217 log.go:172] (0xc0009db290) (0xc0006ac780) Stream added, broadcasting: 3\nI0410 21:15:27.609861 217 log.go:172] (0xc0009db290) Reply frame received for 3\nI0410 21:15:27.609888 217 log.go:172] (0xc0009db290) (0xc0004af540) Create stream\nI0410 21:15:27.609896 217 log.go:172] (0xc0009db290) (0xc0004af540) Stream added, broadcasting: 5\nI0410 21:15:27.610600 217 log.go:172] (0xc0009db290) Reply frame received for 5\nI0410 21:15:27.675378 217 log.go:172] (0xc0009db290) Data frame received for 5\nI0410 21:15:27.675408 217 log.go:172] (0xc0004af540) (5) Data frame handling\nI0410 21:15:27.675429 217 log.go:172] (0xc0004af540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:15:27.700781 217 log.go:172] (0xc0009db290) Data frame received for 3\nI0410 21:15:27.700810 217 log.go:172] (0xc0006ac780) (3) Data frame handling\nI0410 21:15:27.700833 217 log.go:172] (0xc0006ac780) (3) Data frame sent\nI0410 21:15:27.701418 217 log.go:172] 
(0xc0009db290) Data frame received for 5\nI0410 21:15:27.701493 217 log.go:172] (0xc0004af540) (5) Data frame handling\nI0410 21:15:27.701534 217 log.go:172] (0xc0009db290) Data frame received for 3\nI0410 21:15:27.701569 217 log.go:172] (0xc0006ac780) (3) Data frame handling\nI0410 21:15:27.703254 217 log.go:172] (0xc0009db290) Data frame received for 1\nI0410 21:15:27.703267 217 log.go:172] (0xc00099e500) (1) Data frame handling\nI0410 21:15:27.703281 217 log.go:172] (0xc00099e500) (1) Data frame sent\nI0410 21:15:27.703293 217 log.go:172] (0xc0009db290) (0xc00099e500) Stream removed, broadcasting: 1\nI0410 21:15:27.703371 217 log.go:172] (0xc0009db290) Go away received\nI0410 21:15:27.703589 217 log.go:172] (0xc0009db290) (0xc00099e500) Stream removed, broadcasting: 1\nI0410 21:15:27.703610 217 log.go:172] (0xc0009db290) (0xc0006ac780) Stream removed, broadcasting: 3\nI0410 21:15:27.703622 217 log.go:172] (0xc0009db290) (0xc0004af540) Stream removed, broadcasting: 5\n" Apr 10 21:15:27.708: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:15:27.708: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:15:27.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4000 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:15:27.953: INFO: stderr: "I0410 21:15:27.837604 237 log.go:172] (0xc0009d2160) (0xc0006f6500) Create stream\nI0410 21:15:27.837673 237 log.go:172] (0xc0009d2160) (0xc0006f6500) Stream added, broadcasting: 1\nI0410 21:15:27.840588 237 log.go:172] (0xc0009d2160) Reply frame received for 1\nI0410 21:15:27.840653 237 log.go:172] (0xc0009d2160) (0xc0006f65a0) Create stream\nI0410 21:15:27.840679 237 log.go:172] (0xc0009d2160) (0xc0006f65a0) Stream added, broadcasting: 3\nI0410 21:15:27.841898 237 log.go:172] (0xc0009d2160) Reply frame 
received for 3\nI0410 21:15:27.842350 237 log.go:172] (0xc0009d2160) (0xc000812000) Create stream\nI0410 21:15:27.842419 237 log.go:172] (0xc0009d2160) (0xc000812000) Stream added, broadcasting: 5\nI0410 21:15:27.845306 237 log.go:172] (0xc0009d2160) Reply frame received for 5\nI0410 21:15:27.904990 237 log.go:172] (0xc0009d2160) Data frame received for 5\nI0410 21:15:27.905037 237 log.go:172] (0xc000812000) (5) Data frame handling\nI0410 21:15:27.905071 237 log.go:172] (0xc000812000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:15:27.946337 237 log.go:172] (0xc0009d2160) Data frame received for 3\nI0410 21:15:27.946418 237 log.go:172] (0xc0006f65a0) (3) Data frame handling\nI0410 21:15:27.946440 237 log.go:172] (0xc0006f65a0) (3) Data frame sent\nI0410 21:15:27.946454 237 log.go:172] (0xc0009d2160) Data frame received for 5\nI0410 21:15:27.946473 237 log.go:172] (0xc000812000) (5) Data frame handling\nI0410 21:15:27.946508 237 log.go:172] (0xc0009d2160) Data frame received for 3\nI0410 21:15:27.946519 237 log.go:172] (0xc0006f65a0) (3) Data frame handling\nI0410 21:15:27.949062 237 log.go:172] (0xc0009d2160) Data frame received for 1\nI0410 21:15:27.949090 237 log.go:172] (0xc0006f6500) (1) Data frame handling\nI0410 21:15:27.949282 237 log.go:172] (0xc0006f6500) (1) Data frame sent\nI0410 21:15:27.949321 237 log.go:172] (0xc0009d2160) (0xc0006f6500) Stream removed, broadcasting: 1\nI0410 21:15:27.949358 237 log.go:172] (0xc0009d2160) Go away received\nI0410 21:15:27.949609 237 log.go:172] (0xc0009d2160) (0xc0006f6500) Stream removed, broadcasting: 1\nI0410 21:15:27.949630 237 log.go:172] (0xc0009d2160) (0xc0006f65a0) Stream removed, broadcasting: 3\nI0410 21:15:27.949640 237 log.go:172] (0xc0009d2160) (0xc000812000) Stream removed, broadcasting: 5\n" Apr 10 21:15:27.953: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:15:27.953: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:15:27.953: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:15:27.956: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 10 21:15:37.965: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 10 21:15:37.965: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 10 21:15:37.965: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 10 21:15:37.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999302s Apr 10 21:15:38.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986405966s Apr 10 21:15:40.020: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98188871s Apr 10 21:15:41.025: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950859294s Apr 10 21:15:42.030: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.94625846s Apr 10 21:15:43.035: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.941262059s Apr 10 21:15:44.056: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.936317527s Apr 10 21:15:45.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.915012565s Apr 10 21:15:46.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.909684279s Apr 10 21:15:47.073: INFO: Verifying statefulset ss doesn't scale past 3 for another 903.293003ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4000 Apr 10 21:15:48.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4000 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:15:48.285: INFO: stderr: "I0410 21:15:48.218890 257 log.go:172] (0xc000105760) 
(0xc00062da40) Create stream\nI0410 21:15:48.218975 257 log.go:172] (0xc000105760) (0xc00062da40) Stream added, broadcasting: 1\nI0410 21:15:48.221939 257 log.go:172] (0xc000105760) Reply frame received for 1\nI0410 21:15:48.221984 257 log.go:172] (0xc000105760) (0xc0007da000) Create stream\nI0410 21:15:48.221997 257 log.go:172] (0xc000105760) (0xc0007da000) Stream added, broadcasting: 3\nI0410 21:15:48.222947 257 log.go:172] (0xc000105760) Reply frame received for 3\nI0410 21:15:48.222994 257 log.go:172] (0xc000105760) (0xc000588000) Create stream\nI0410 21:15:48.223012 257 log.go:172] (0xc000105760) (0xc000588000) Stream added, broadcasting: 5\nI0410 21:15:48.224047 257 log.go:172] (0xc000105760) Reply frame received for 5\nI0410 21:15:48.277884 257 log.go:172] (0xc000105760) Data frame received for 5\nI0410 21:15:48.277910 257 log.go:172] (0xc000588000) (5) Data frame handling\nI0410 21:15:48.277917 257 log.go:172] (0xc000588000) (5) Data frame sent\nI0410 21:15:48.277923 257 log.go:172] (0xc000105760) Data frame received for 5\nI0410 21:15:48.277927 257 log.go:172] (0xc000588000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0410 21:15:48.277981 257 log.go:172] (0xc000105760) Data frame received for 3\nI0410 21:15:48.278014 257 log.go:172] (0xc0007da000) (3) Data frame handling\nI0410 21:15:48.278037 257 log.go:172] (0xc0007da000) (3) Data frame sent\nI0410 21:15:48.278197 257 log.go:172] (0xc000105760) Data frame received for 3\nI0410 21:15:48.278228 257 log.go:172] (0xc0007da000) (3) Data frame handling\nI0410 21:15:48.279789 257 log.go:172] (0xc000105760) Data frame received for 1\nI0410 21:15:48.279811 257 log.go:172] (0xc00062da40) (1) Data frame handling\nI0410 21:15:48.279844 257 log.go:172] (0xc00062da40) (1) Data frame sent\nI0410 21:15:48.279877 257 log.go:172] (0xc000105760) (0xc00062da40) Stream removed, broadcasting: 1\nI0410 21:15:48.279919 257 log.go:172] (0xc000105760) Go away received\nI0410 21:15:48.280345 257 
log.go:172] (0xc000105760) (0xc00062da40) Stream removed, broadcasting: 1\nI0410 21:15:48.280366 257 log.go:172] (0xc000105760) (0xc0007da000) Stream removed, broadcasting: 3\nI0410 21:15:48.280377 257 log.go:172] (0xc000105760) (0xc000588000) Stream removed, broadcasting: 5\n" Apr 10 21:15:48.285: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:15:48.285: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 10 21:15:48.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4000 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:15:48.497: INFO: stderr: "I0410 21:15:48.416085 279 log.go:172] (0xc000998b00) (0xc000528000) Create stream\nI0410 21:15:48.416145 279 log.go:172] (0xc000998b00) (0xc000528000) Stream added, broadcasting: 1\nI0410 21:15:48.418638 279 log.go:172] (0xc000998b00) Reply frame received for 1\nI0410 21:15:48.418690 279 log.go:172] (0xc000998b00) (0xc000687b80) Create stream\nI0410 21:15:48.418713 279 log.go:172] (0xc000998b00) (0xc000687b80) Stream added, broadcasting: 3\nI0410 21:15:48.419692 279 log.go:172] (0xc000998b00) Reply frame received for 3\nI0410 21:15:48.419720 279 log.go:172] (0xc000998b00) (0xc000528140) Create stream\nI0410 21:15:48.419732 279 log.go:172] (0xc000998b00) (0xc000528140) Stream added, broadcasting: 5\nI0410 21:15:48.420755 279 log.go:172] (0xc000998b00) Reply frame received for 5\nI0410 21:15:48.490840 279 log.go:172] (0xc000998b00) Data frame received for 3\nI0410 21:15:48.490869 279 log.go:172] (0xc000687b80) (3) Data frame handling\nI0410 21:15:48.490878 279 log.go:172] (0xc000687b80) (3) Data frame sent\nI0410 21:15:48.490905 279 log.go:172] (0xc000998b00) Data frame received for 5\nI0410 21:15:48.490936 279 log.go:172] (0xc000528140) (5) Data frame handling\nI0410 21:15:48.490971 279 log.go:172] 
(0xc000528140) (5) Data frame sent\nI0410 21:15:48.491001 279 log.go:172] (0xc000998b00) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0410 21:15:48.491023 279 log.go:172] (0xc000528140) (5) Data frame handling\nI0410 21:15:48.491071 279 log.go:172] (0xc000998b00) Data frame received for 3\nI0410 21:15:48.491102 279 log.go:172] (0xc000687b80) (3) Data frame handling\nI0410 21:15:48.492557 279 log.go:172] (0xc000998b00) Data frame received for 1\nI0410 21:15:48.492587 279 log.go:172] (0xc000528000) (1) Data frame handling\nI0410 21:15:48.492617 279 log.go:172] (0xc000528000) (1) Data frame sent\nI0410 21:15:48.492635 279 log.go:172] (0xc000998b00) (0xc000528000) Stream removed, broadcasting: 1\nI0410 21:15:48.492791 279 log.go:172] (0xc000998b00) Go away received\nI0410 21:15:48.493087 279 log.go:172] (0xc000998b00) (0xc000528000) Stream removed, broadcasting: 1\nI0410 21:15:48.493249 279 log.go:172] (0xc000998b00) (0xc000687b80) Stream removed, broadcasting: 3\nI0410 21:15:48.493281 279 log.go:172] (0xc000998b00) (0xc000528140) Stream removed, broadcasting: 5\n" Apr 10 21:15:48.497: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:15:48.497: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 10 21:15:48.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4000 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:15:48.701: INFO: stderr: "I0410 21:15:48.619310 300 log.go:172] (0xc00092c000) (0xc000832000) Create stream\nI0410 21:15:48.619375 300 log.go:172] (0xc00092c000) (0xc000832000) Stream added, broadcasting: 1\nI0410 21:15:48.622562 300 log.go:172] (0xc00092c000) Reply frame received for 1\nI0410 21:15:48.622598 300 log.go:172] (0xc00092c000) (0xc0008320a0) Create stream\nI0410 21:15:48.622616 300 log.go:172] 
(0xc00092c000) (0xc0008320a0) Stream added, broadcasting: 3\nI0410 21:15:48.623363 300 log.go:172] (0xc00092c000) Reply frame received for 3\nI0410 21:15:48.623403 300 log.go:172] (0xc00092c000) (0xc000832140) Create stream\nI0410 21:15:48.623425 300 log.go:172] (0xc00092c000) (0xc000832140) Stream added, broadcasting: 5\nI0410 21:15:48.624394 300 log.go:172] (0xc00092c000) Reply frame received for 5\nI0410 21:15:48.692778 300 log.go:172] (0xc00092c000) Data frame received for 5\nI0410 21:15:48.692917 300 log.go:172] (0xc000832140) (5) Data frame handling\nI0410 21:15:48.692959 300 log.go:172] (0xc000832140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0410 21:15:48.693728 300 log.go:172] (0xc00092c000) Data frame received for 3\nI0410 21:15:48.693788 300 log.go:172] (0xc0008320a0) (3) Data frame handling\nI0410 21:15:48.693815 300 log.go:172] (0xc0008320a0) (3) Data frame sent\nI0410 21:15:48.693838 300 log.go:172] (0xc00092c000) Data frame received for 3\nI0410 21:15:48.693857 300 log.go:172] (0xc0008320a0) (3) Data frame handling\nI0410 21:15:48.693943 300 log.go:172] (0xc00092c000) Data frame received for 5\nI0410 21:15:48.693990 300 log.go:172] (0xc000832140) (5) Data frame handling\nI0410 21:15:48.695373 300 log.go:172] (0xc00092c000) Data frame received for 1\nI0410 21:15:48.695405 300 log.go:172] (0xc000832000) (1) Data frame handling\nI0410 21:15:48.695433 300 log.go:172] (0xc000832000) (1) Data frame sent\nI0410 21:15:48.695509 300 log.go:172] (0xc00092c000) (0xc000832000) Stream removed, broadcasting: 1\nI0410 21:15:48.695571 300 log.go:172] (0xc00092c000) Go away received\nI0410 21:15:48.696389 300 log.go:172] (0xc00092c000) (0xc000832000) Stream removed, broadcasting: 1\nI0410 21:15:48.696424 300 log.go:172] (0xc00092c000) (0xc0008320a0) Stream removed, broadcasting: 3\nI0410 21:15:48.696437 300 log.go:172] (0xc00092c000) (0xc000832140) Stream removed, broadcasting: 5\n" Apr 10 21:15:48.701: INFO: stdout: "'/tmp/index.html' 
-> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:15:48.701: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 10 21:15:48.701: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 10 21:16:18.716: INFO: Deleting all statefulset in ns statefulset-4000 Apr 10 21:16:18.719: INFO: Scaling statefulset ss to 0 Apr 10 21:16:18.727: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:16:18.729: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:16:18.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4000" for this suite. 
• [SLOW TEST:92.236 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":33,"skipped":583,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:16:18.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 10 21:16:18.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6347' Apr 10 21:16:18.930: INFO: stderr: "" Apr 10 21:16:18.930: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Apr 10 21:16:18.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6347' Apr 10 21:16:29.482: INFO: stderr: "" Apr 10 21:16:29.482: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:16:29.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6347" for this suite. • [SLOW TEST:10.750 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":34,"skipped":600,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:16:29.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0410 21:16:40.441910 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 10 21:16:40.441: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:16:40.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7308" for this suite. • [SLOW TEST:10.948 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":35,"skipped":617,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:16:40.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:16:56.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1589" for this suite. • [SLOW TEST:16.111 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":36,"skipped":624,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:16:56.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 10 21:16:56.622: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 10 21:16:56.632: INFO: Waiting for terminating namespaces to be deleted... 
Apr 10 21:16:56.635: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 10 21:16:56.652: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:16:56.652: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 21:16:56.652: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:16:56.652: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 21:16:56.652: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 10 21:16:56.669: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:16:56.669: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 21:16:56.669: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 10 21:16:56.669: INFO: Container kube-bench ready: false, restart count 0 Apr 10 21:16:56.669: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:16:56.669: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 21:16:56.669: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 10 21:16:56.669: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-128091cd-b6e9-44d6-a0c4-4423e7056b5d STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-128091cd-b6e9-44d6-a0c4-4423e7056b5d off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-128091cd-b6e9-44d6-a0c4-4423e7056b5d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:22:04.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4497" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.276 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":37,"skipped":639,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:22:04.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1b180ed6-2494-4c5e-9202-314cc14880ce STEP: Creating a pod to test consume secrets Apr 10 21:22:04.900: INFO: Waiting up to 5m0s for pod "pod-secrets-24bb0b75-8f73-4e93-9b2a-07faf3618003" in namespace "secrets-5437" to be "success or failure" Apr 10 21:22:04.903: INFO: Pod "pod-secrets-24bb0b75-8f73-4e93-9b2a-07faf3618003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.967917ms Apr 10 21:22:06.906: INFO: Pod "pod-secrets-24bb0b75-8f73-4e93-9b2a-07faf3618003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006250629s Apr 10 21:22:08.910: INFO: Pod "pod-secrets-24bb0b75-8f73-4e93-9b2a-07faf3618003": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010729603s STEP: Saw pod success Apr 10 21:22:08.910: INFO: Pod "pod-secrets-24bb0b75-8f73-4e93-9b2a-07faf3618003" satisfied condition "success or failure" Apr 10 21:22:08.914: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-24bb0b75-8f73-4e93-9b2a-07faf3618003 container secret-volume-test: STEP: delete the pod Apr 10 21:22:08.962: INFO: Waiting for pod pod-secrets-24bb0b75-8f73-4e93-9b2a-07faf3618003 to disappear Apr 10 21:22:08.966: INFO: Pod pod-secrets-24bb0b75-8f73-4e93-9b2a-07faf3618003 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:22:08.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5437" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":642,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:22:08.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-cdc9969e-80e8-425f-b5e6-a6e15609c982 Apr 10 21:22:09.032: INFO: Pod name 
my-hostname-basic-cdc9969e-80e8-425f-b5e6-a6e15609c982: Found 0 pods out of 1 Apr 10 21:22:14.045: INFO: Pod name my-hostname-basic-cdc9969e-80e8-425f-b5e6-a6e15609c982: Found 1 pods out of 1 Apr 10 21:22:14.045: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-cdc9969e-80e8-425f-b5e6-a6e15609c982" are running Apr 10 21:22:14.065: INFO: Pod "my-hostname-basic-cdc9969e-80e8-425f-b5e6-a6e15609c982-jz6hp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 21:22:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 21:22:12 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 21:22:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 21:22:09 +0000 UTC Reason: Message:}]) Apr 10 21:22:14.065: INFO: Trying to dial the pod Apr 10 21:22:19.077: INFO: Controller my-hostname-basic-cdc9969e-80e8-425f-b5e6-a6e15609c982: Got expected result from replica 1 [my-hostname-basic-cdc9969e-80e8-425f-b5e6-a6e15609c982-jz6hp]: "my-hostname-basic-cdc9969e-80e8-425f-b5e6-a6e15609c982-jz6hp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:22:19.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8260" for this suite. 
• [SLOW TEST:10.110 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":39,"skipped":681,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:22:19.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 10 21:22:19.120: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:22:25.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1023" for this suite. 
• [SLOW TEST:5.982 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":40,"skipped":690,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:22:25.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-xmbcv in namespace proxy-5895 I0410 21:22:25.253860 7 runners.go:189] Created replication controller with name: proxy-service-xmbcv, namespace: proxy-5895, replica count: 1 I0410 21:22:26.304347 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 21:22:27.304535 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0410 21:22:28.304786 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0410 21:22:29.304998 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0410 21:22:30.305364 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0410 21:22:31.305599 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0410 21:22:32.305789 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0410 21:22:33.305982 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0410 21:22:34.306171 7 runners.go:189] proxy-service-xmbcv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 10 21:22:34.309: INFO: setup took 9.19144579s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 10 21:22:34.313: INFO: (0) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... 
(200; 3.845002ms) Apr 10 21:22:34.313: INFO: (0) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.163445ms) Apr 10 21:22:34.315: INFO: (0) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 5.466513ms) Apr 10 21:22:34.316: INFO: (0) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 6.725082ms) Apr 10 21:22:34.316: INFO: (0) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 6.771137ms) Apr 10 21:22:34.316: INFO: (0) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... (200; 6.98331ms) Apr 10 21:22:34.316: INFO: (0) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 7.004832ms) Apr 10 21:22:34.316: INFO: (0) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 7.066179ms) Apr 10 21:22:34.316: INFO: (0) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 7.078762ms) Apr 10 21:22:34.317: INFO: (0) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 7.512572ms) Apr 10 21:22:34.319: INFO: (0) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 9.743801ms) Apr 10 21:22:34.324: INFO: (0) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 15.418347ms) Apr 10 21:22:34.324: INFO: (0) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 15.473486ms) Apr 10 21:22:34.325: INFO: (0) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... 
(200; 2.651213ms) Apr 10 21:22:34.329: INFO: (1) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 2.723621ms) Apr 10 21:22:34.331: INFO: (1) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.788855ms) Apr 10 21:22:34.331: INFO: (1) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.799489ms) Apr 10 21:22:34.332: INFO: (1) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 6.191573ms) Apr 10 21:22:34.332: INFO: (1) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 6.125389ms) Apr 10 21:22:34.332: INFO: (1) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... (200; 6.171537ms) Apr 10 21:22:34.332: INFO: (1) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 6.352688ms) Apr 10 21:22:34.332: INFO: (1) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... (200; 4.279041ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... 
(200; 4.635533ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.712202ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 4.783242ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 5.060057ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 5.090931ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 5.098888ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 5.069763ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 5.160765ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 5.275098ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 5.306603ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 5.293254ms) Apr 10 21:22:34.338: INFO: (2) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 5.322543ms) Apr 10 21:22:34.340: INFO: (3) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 2.043794ms) Apr 10 21:22:34.342: INFO: (3) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 2.51215ms) Apr 10 21:22:34.342: INFO: (3) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 2.874221ms) Apr 10 21:22:34.343: INFO: (3) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... 
(200; 3.213899ms) Apr 10 21:22:34.343: INFO: (3) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 3.429098ms) Apr 10 21:22:34.343: INFO: (3) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 3.641057ms) Apr 10 21:22:34.343: INFO: (3) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 3.839278ms) Apr 10 21:22:34.344: INFO: (3) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 5.491516ms) Apr 10 21:22:34.344: INFO: (3) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 5.271208ms) Apr 10 21:22:34.344: INFO: (3) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... (200; 5.330703ms) Apr 10 21:22:34.344: INFO: (3) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 5.740986ms) Apr 10 21:22:34.344: INFO: (3) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test (200; 5.709325ms) Apr 10 21:22:34.344: INFO: (3) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 5.385864ms) Apr 10 21:22:34.345: INFO: (3) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 6.03213ms) Apr 10 21:22:34.345: INFO: (3) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 5.758816ms) Apr 10 21:22:34.348: INFO: (4) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... (200; 3.416829ms) Apr 10 21:22:34.348: INFO: (4) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... 
(200; 3.402046ms) Apr 10 21:22:34.348: INFO: (4) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.456979ms) Apr 10 21:22:34.349: INFO: (4) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 3.601806ms) Apr 10 21:22:34.349: INFO: (4) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test (200; 3.762678ms) Apr 10 21:22:34.355: INFO: (5) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.766728ms) Apr 10 21:22:34.355: INFO: (5) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... (200; 4.7614ms) Apr 10 21:22:34.356: INFO: (5) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test<... (200; 5.550617ms) Apr 10 21:22:34.356: INFO: (5) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 5.645634ms) Apr 10 21:22:34.358: INFO: (5) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 7.269321ms) Apr 10 21:22:34.358: INFO: (5) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 7.257969ms) Apr 10 21:22:34.358: INFO: (5) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 7.332763ms) Apr 10 21:22:34.358: INFO: (5) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 7.379211ms) Apr 10 21:22:34.358: INFO: (5) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 7.454564ms) Apr 10 21:22:34.358: INFO: (5) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 7.596618ms) Apr 10 21:22:34.359: INFO: (5) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 7.834963ms) Apr 10 21:22:34.366: INFO: (6) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux 
(200; 6.971641ms) Apr 10 21:22:34.366: INFO: (6) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 7.064753ms) Apr 10 21:22:34.366: INFO: (6) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 6.807567ms) Apr 10 21:22:34.366: INFO: (6) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 6.89863ms) Apr 10 21:22:34.366: INFO: (6) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 6.94062ms) Apr 10 21:22:34.366: INFO: (6) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... (200; 7.21697ms) Apr 10 21:22:34.367: INFO: (6) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 8.424917ms) Apr 10 21:22:34.367: INFO: (6) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 8.882873ms) Apr 10 21:22:34.368: INFO: (6) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 8.820628ms) Apr 10 21:22:34.368: INFO: (6) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 8.725561ms) Apr 10 21:22:34.368: INFO: (6) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 8.723235ms) Apr 10 21:22:34.368: INFO: (6) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 8.764168ms) Apr 10 21:22:34.368: INFO: (6) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 8.816369ms) Apr 10 21:22:34.368: INFO: (6) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... (200; 8.771173ms) Apr 10 21:22:34.368: INFO: (6) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 8.832668ms) Apr 10 21:22:34.368: INFO: (6) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test<... 
(200; 3.77064ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 3.798394ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 4.005536ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 4.019435ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 4.01203ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 4.074694ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 4.124451ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.143688ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.303948ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.337416ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.446435ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 4.42646ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 4.433548ms) Apr 10 21:22:34.372: INFO: (7) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... (200; 4.60268ms) Apr 10 21:22:34.375: INFO: (8) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... 
(200; 2.591564ms) Apr 10 21:22:34.375: INFO: (8) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 2.776007ms) Apr 10 21:22:34.375: INFO: (8) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... (200; 3.997936ms) Apr 10 21:22:34.377: INFO: (8) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.354017ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.978181ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 5.186641ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 5.148384ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 5.036088ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 5.080017ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 5.312408ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 5.782159ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 5.655327ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 5.804642ms) Apr 10 21:22:34.378: INFO: (8) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 5.853596ms) Apr 10 21:22:34.379: INFO: (8) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 5.889733ms) Apr 10 21:22:34.382: INFO: (9) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 3.205655ms) 
Apr 10 21:22:34.382: INFO: (9) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.209295ms) Apr 10 21:22:34.383: INFO: (9) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.140854ms) Apr 10 21:22:34.383: INFO: (9) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.119797ms) Apr 10 21:22:34.383: INFO: (9) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 4.111888ms) Apr 10 21:22:34.383: INFO: (9) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... (200; 4.72222ms) Apr 10 21:22:34.384: INFO: (9) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 4.744353ms) Apr 10 21:22:34.384: INFO: (9) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.787153ms) Apr 10 21:22:34.384: INFO: (9) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 4.87704ms) Apr 10 21:22:34.384: INFO: (9) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 4.815409ms) Apr 10 21:22:34.384: INFO: (9) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 4.800473ms) Apr 10 21:22:34.384: INFO: (9) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... (200; 4.808351ms) Apr 10 21:22:34.384: INFO: (9) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 5.088667ms) Apr 10 21:22:34.387: INFO: (10) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.294094ms) Apr 10 21:22:34.387: INFO: (10) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 3.588123ms) Apr 10 21:22:34.388: INFO: (10) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... 
(200; 3.722675ms) Apr 10 21:22:34.388: INFO: (10) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.909639ms) Apr 10 21:22:34.388: INFO: (10) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... (200; 4.305212ms) Apr 10 21:22:34.388: INFO: (10) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 4.523037ms) Apr 10 21:22:34.388: INFO: (10) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 4.515942ms) Apr 10 21:22:34.388: INFO: (10) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.530984ms) Apr 10 21:22:34.389: INFO: (10) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 4.697288ms) Apr 10 21:22:34.389: INFO: (10) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 5.216481ms) Apr 10 21:22:34.389: INFO: (10) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 5.375874ms) Apr 10 21:22:34.389: INFO: (10) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 5.519251ms) Apr 10 21:22:34.390: INFO: (10) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 5.784881ms) Apr 10 21:22:34.393: INFO: (11) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... (200; 3.123223ms) Apr 10 21:22:34.393: INFO: (11) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... 
(200; 3.251758ms) Apr 10 21:22:34.393: INFO: (11) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.315973ms) Apr 10 21:22:34.394: INFO: (11) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 3.772982ms) Apr 10 21:22:34.394: INFO: (11) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 3.716492ms) Apr 10 21:22:34.394: INFO: (11) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.985555ms) Apr 10 21:22:34.394: INFO: (11) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 4.023532ms) Apr 10 21:22:34.394: INFO: (11) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 4.019424ms) Apr 10 21:22:34.394: INFO: (11) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 4.001483ms) Apr 10 21:22:34.394: INFO: (11) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... (200; 3.553065ms) Apr 10 21:22:34.399: INFO: (12) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... 
(200; 3.581787ms) Apr 10 21:22:34.399: INFO: (12) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 3.655293ms) Apr 10 21:22:34.399: INFO: (12) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 3.767601ms) Apr 10 21:22:34.399: INFO: (12) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test (200; 3.792493ms) Apr 10 21:22:34.399: INFO: (12) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.851361ms) Apr 10 21:22:34.399: INFO: (12) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 3.801289ms) Apr 10 21:22:34.399: INFO: (12) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 3.845947ms) Apr 10 21:22:34.399: INFO: (12) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.89624ms) Apr 10 21:22:34.400: INFO: (12) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 4.691748ms) Apr 10 21:22:34.400: INFO: (12) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 4.682175ms) Apr 10 21:22:34.400: INFO: (12) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 4.651757ms) Apr 10 21:22:34.400: INFO: (12) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 4.704257ms) Apr 10 21:22:34.400: INFO: (12) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 4.741576ms) Apr 10 21:22:34.400: INFO: (12) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 4.810561ms) Apr 10 21:22:34.404: INFO: (13) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.214474ms) Apr 10 21:22:34.405: INFO: (13) 
/api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... (200; 4.281732ms) Apr 10 21:22:34.405: INFO: (13) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 4.266385ms) Apr 10 21:22:34.405: INFO: (13) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 4.286148ms) Apr 10 21:22:34.405: INFO: (13) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.27759ms) Apr 10 21:22:34.405: INFO: (13) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.882512ms) Apr 10 21:22:34.406: INFO: (13) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... (200; 5.339098ms) Apr 10 21:22:34.406: INFO: (13) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 5.580203ms) Apr 10 21:22:34.406: INFO: (13) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 5.658277ms) Apr 10 21:22:34.406: INFO: (13) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test<... 
(200; 4.551527ms) Apr 10 21:22:34.411: INFO: (14) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.546508ms) Apr 10 21:22:34.411: INFO: (14) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 4.578512ms) Apr 10 21:22:34.411: INFO: (14) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.580714ms) Apr 10 21:22:34.411: INFO: (14) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.730842ms) Apr 10 21:22:34.411: INFO: (14) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 4.856097ms) Apr 10 21:22:34.411: INFO: (14) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.831557ms) Apr 10 21:22:34.411: INFO: (14) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... (200; 5.044557ms) Apr 10 21:22:34.414: INFO: (15) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 2.089672ms) Apr 10 21:22:34.414: INFO: (15) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... (200; 2.095027ms) Apr 10 21:22:34.416: INFO: (15) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.038238ms) Apr 10 21:22:34.416: INFO: (15) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... 
(200; 4.366044ms) Apr 10 21:22:34.416: INFO: (15) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test (200; 5.029528ms) Apr 10 21:22:34.417: INFO: (15) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 5.171623ms) Apr 10 21:22:34.417: INFO: (15) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 5.104486ms) Apr 10 21:22:34.417: INFO: (15) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 5.108186ms) Apr 10 21:22:34.417: INFO: (15) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 5.20869ms) Apr 10 21:22:34.419: INFO: (16) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... (200; 1.921241ms) Apr 10 21:22:34.419: INFO: (16) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 2.171499ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 4.548426ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 4.567081ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... 
(200; 4.664746ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 4.690264ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 4.640947ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.722752ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 4.687798ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.686274ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 4.77335ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 4.820015ms) Apr 10 21:22:34.422: INFO: (16) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: ... (200; 3.244967ms) Apr 10 21:22:34.425: INFO: (17) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 3.31937ms) Apr 10 21:22:34.425: INFO: (17) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 3.496385ms) Apr 10 21:22:34.425: INFO: (17) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd/proxy/: test (200; 3.677389ms) Apr 10 21:22:34.426: INFO: (17) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 3.637702ms) Apr 10 21:22:34.426: INFO: (17) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... 
(200; 3.617946ms) Apr 10 21:22:34.426: INFO: (17) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 3.874222ms) Apr 10 21:22:34.426: INFO: (17) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 3.817817ms) Apr 10 21:22:34.426: INFO: (17) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 3.963048ms) Apr 10 21:22:34.426: INFO: (17) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test<... (200; 2.520846ms) Apr 10 21:22:34.430: INFO: (18) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:460/proxy/: tls baz (200; 3.101193ms) Apr 10 21:22:34.430: INFO: (18) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname2/proxy/: tls qux (200; 3.089791ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 4.158952ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.265964ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 4.408617ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... 
(200; 4.690097ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 4.835953ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:462/proxy/: tls qux (200; 4.876873ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname2/proxy/: bar (200; 4.806691ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test (200; 4.794956ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname1/proxy/: foo (200; 4.908143ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/services/http:proxy-service-xmbcv:portname2/proxy/: bar (200; 4.834962ms) Apr 10 21:22:34.431: INFO: (18) /api/v1/namespaces/proxy-5895/services/https:proxy-service-xmbcv:tlsportname1/proxy/: tls baz (200; 4.853867ms) Apr 10 21:22:34.441: INFO: (19) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:1080/proxy/: test<... (200; 10.051754ms) Apr 10 21:22:34.442: INFO: (19) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:1080/proxy/: ... 
(200; 10.09358ms) Apr 10 21:22:34.442: INFO: (19) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:160/proxy/: foo (200; 10.275945ms) Apr 10 21:22:34.442: INFO: (19) /api/v1/namespaces/proxy-5895/pods/http:proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 10.336062ms) Apr 10 21:22:34.442: INFO: (19) /api/v1/namespaces/proxy-5895/pods/proxy-service-xmbcv-wjvjd:162/proxy/: bar (200; 10.299084ms) Apr 10 21:22:34.442: INFO: (19) /api/v1/namespaces/proxy-5895/services/proxy-service-xmbcv:portname1/proxy/: foo (200; 10.351499ms) Apr 10 21:22:34.442: INFO: (19) /api/v1/namespaces/proxy-5895/pods/https:proxy-service-xmbcv-wjvjd:443/proxy/: test (200; 11.129624ms) STEP: deleting ReplicationController proxy-service-xmbcv in namespace proxy-5895, will wait for the garbage collector to delete the pods Apr 10 21:22:34.501: INFO: Deleting ReplicationController proxy-service-xmbcv took: 6.856688ms Apr 10 21:22:34.801: INFO: Terminating ReplicationController proxy-service-xmbcv pods took: 300.222524ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:22:39.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5895" for this suite. 
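Annotation: every GET above goes through the apiserver's proxy subresource, and the URL encodes the namespace, the resource kind, an optional scheme prefix, and a port number or named port. A minimal sketch of how those paths are assembled (the `proxy_path` helper is hypothetical, written only to mirror the paths visible in this log; it is not part of the e2e framework):

```python
def proxy_path(namespace: str, kind: str, name: str, port=None, scheme=None) -> str:
    # Build an apiserver proxy subresource path like those in the log above.
    # kind is "pods" or "services"; port may be a number or a named port;
    # scheme ("http"/"https"), when present, prefixes the target segment.
    target = f"{scheme}:{name}" if scheme else name
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# Paths matching entries from the log:
print(proxy_path("proxy-5895", "pods", "proxy-service-xmbcv-wjvjd", 160, "http"))
print(proxy_path("proxy-5895", "services", "proxy-service-xmbcv", "portname1"))
```

This is why the same backend pod answers under many spellings: with and without a scheme prefix, by pod port or by service port name, the apiserver resolves each variant to the same target.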
• [SLOW TEST:14.243 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":41,"skipped":711,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:22:39.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 10 21:22:39.399: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 10 21:22:48.464: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:22:48.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7754" for this suite. • [SLOW TEST:9.164 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":719,"failed":0} SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:22:48.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:22:48.520: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9221 I0410 21:22:48.536575 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9221, replica count: 1 I0410 21:22:49.586949 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 21:22:50.587186 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 
running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 21:22:51.587456 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 10 21:22:51.719: INFO: Created: latency-svc-dlwpf Apr 10 21:22:51.724: INFO: Got endpoints: latency-svc-dlwpf [37.132482ms] Apr 10 21:22:51.755: INFO: Created: latency-svc-jqmkj Apr 10 21:22:51.789: INFO: Got endpoints: latency-svc-jqmkj [64.275753ms] Apr 10 21:22:51.799: INFO: Created: latency-svc-k6xj8 Apr 10 21:22:51.814: INFO: Got endpoints: latency-svc-k6xj8 [89.294944ms] Apr 10 21:22:51.847: INFO: Created: latency-svc-b6sq6 Apr 10 21:22:51.884: INFO: Got endpoints: latency-svc-b6sq6 [159.406805ms] Apr 10 21:22:51.937: INFO: Created: latency-svc-v6g6s Apr 10 21:22:51.966: INFO: Got endpoints: latency-svc-v6g6s [240.741096ms] Apr 10 21:22:51.966: INFO: Created: latency-svc-wvwd7 Apr 10 21:22:51.978: INFO: Got endpoints: latency-svc-wvwd7 [252.914038ms] Apr 10 21:22:52.001: INFO: Created: latency-svc-frwxq Apr 10 21:22:52.014: INFO: Got endpoints: latency-svc-frwxq [288.646666ms] Apr 10 21:22:52.092: INFO: Created: latency-svc-rcv8x Apr 10 21:22:52.097: INFO: Got endpoints: latency-svc-rcv8x [371.756121ms] Apr 10 21:22:52.163: INFO: Created: latency-svc-zkvvb Apr 10 21:22:52.182: INFO: Got endpoints: latency-svc-zkvvb [457.084544ms] Apr 10 21:22:52.242: INFO: Created: latency-svc-g7z6r Apr 10 21:22:52.248: INFO: Got endpoints: latency-svc-g7z6r [522.732824ms] Apr 10 21:22:52.265: INFO: Created: latency-svc-qjg6m Apr 10 21:22:52.308: INFO: Got endpoints: latency-svc-qjg6m [583.37946ms] Apr 10 21:22:52.327: INFO: Created: latency-svc-rh575 Apr 10 21:22:52.385: INFO: Got endpoints: latency-svc-rh575 [659.841821ms] Apr 10 21:22:52.406: INFO: Created: latency-svc-sswwr Apr 10 21:22:52.428: INFO: Got endpoints: latency-svc-sswwr [703.132352ms] Apr 10 21:22:52.469: INFO: Created: latency-svc-wrcxp Apr 10 
21:22:52.481: INFO: Got endpoints: latency-svc-wrcxp [756.026772ms] Apr 10 21:22:52.535: INFO: Created: latency-svc-947qm Apr 10 21:22:52.555: INFO: Got endpoints: latency-svc-947qm [830.301069ms] Apr 10 21:22:52.556: INFO: Created: latency-svc-p9b8s Apr 10 21:22:52.567: INFO: Got endpoints: latency-svc-p9b8s [841.949998ms] Apr 10 21:22:52.595: INFO: Created: latency-svc-6v924 Apr 10 21:22:52.610: INFO: Got endpoints: latency-svc-6v924 [820.631605ms] Apr 10 21:22:52.632: INFO: Created: latency-svc-b4fnh Apr 10 21:22:52.681: INFO: Got endpoints: latency-svc-b4fnh [867.130853ms] Apr 10 21:22:52.681: INFO: Created: latency-svc-q65p6 Apr 10 21:22:52.694: INFO: Got endpoints: latency-svc-q65p6 [810.095825ms] Apr 10 21:22:52.711: INFO: Created: latency-svc-88kbc Apr 10 21:22:52.725: INFO: Got endpoints: latency-svc-88kbc [759.008985ms] Apr 10 21:22:52.741: INFO: Created: latency-svc-w5htf Apr 10 21:22:52.755: INFO: Got endpoints: latency-svc-w5htf [776.763621ms] Apr 10 21:22:52.828: INFO: Created: latency-svc-pmbgs Apr 10 21:22:52.832: INFO: Got endpoints: latency-svc-pmbgs [818.050666ms] Apr 10 21:22:52.885: INFO: Created: latency-svc-nwg9m Apr 10 21:22:52.899: INFO: Got endpoints: latency-svc-nwg9m [802.561927ms] Apr 10 21:22:52.921: INFO: Created: latency-svc-s4jfr Apr 10 21:22:52.960: INFO: Got endpoints: latency-svc-s4jfr [777.615619ms] Apr 10 21:22:52.975: INFO: Created: latency-svc-rqq8x Apr 10 21:22:52.990: INFO: Got endpoints: latency-svc-rqq8x [742.005281ms] Apr 10 21:22:53.009: INFO: Created: latency-svc-z779j Apr 10 21:22:53.020: INFO: Got endpoints: latency-svc-z779j [711.640709ms] Apr 10 21:22:53.039: INFO: Created: latency-svc-sw66w Apr 10 21:22:53.050: INFO: Got endpoints: latency-svc-sw66w [665.40362ms] Apr 10 21:22:53.092: INFO: Created: latency-svc-6tccd Apr 10 21:22:53.119: INFO: Got endpoints: latency-svc-6tccd [691.517118ms] Apr 10 21:22:53.122: INFO: Created: latency-svc-84fvq Apr 10 21:22:53.149: INFO: Got endpoints: latency-svc-84fvq 
[668.064755ms] Apr 10 21:22:53.177: INFO: Created: latency-svc-mtb4j Apr 10 21:22:53.190: INFO: Got endpoints: latency-svc-mtb4j [634.012307ms] Apr 10 21:22:53.219: INFO: Created: latency-svc-nxpg9 Apr 10 21:22:53.232: INFO: Got endpoints: latency-svc-nxpg9 [664.299062ms] Apr 10 21:22:53.269: INFO: Created: latency-svc-b9fhv Apr 10 21:22:53.292: INFO: Got endpoints: latency-svc-b9fhv [681.854346ms] Apr 10 21:22:53.317: INFO: Created: latency-svc-hk79n Apr 10 21:22:53.355: INFO: Got endpoints: latency-svc-hk79n [674.41635ms] Apr 10 21:22:53.377: INFO: Created: latency-svc-95qz8 Apr 10 21:22:53.388: INFO: Got endpoints: latency-svc-95qz8 [694.13446ms] Apr 10 21:22:53.423: INFO: Created: latency-svc-zbzq5 Apr 10 21:22:53.453: INFO: Got endpoints: latency-svc-zbzq5 [728.127138ms] Apr 10 21:22:53.505: INFO: Created: latency-svc-sd8qb Apr 10 21:22:53.510: INFO: Got endpoints: latency-svc-sd8qb [755.00529ms] Apr 10 21:22:53.539: INFO: Created: latency-svc-96r74 Apr 10 21:22:53.551: INFO: Got endpoints: latency-svc-96r74 [719.516048ms] Apr 10 21:22:53.575: INFO: Created: latency-svc-g7d85 Apr 10 21:22:53.589: INFO: Got endpoints: latency-svc-g7d85 [689.233301ms] Apr 10 21:22:53.637: INFO: Created: latency-svc-hpwlc Apr 10 21:22:53.646: INFO: Got endpoints: latency-svc-hpwlc [685.726571ms] Apr 10 21:22:53.675: INFO: Created: latency-svc-vwsgj Apr 10 21:22:53.684: INFO: Got endpoints: latency-svc-vwsgj [693.787203ms] Apr 10 21:22:53.706: INFO: Created: latency-svc-smsbk Apr 10 21:22:53.714: INFO: Got endpoints: latency-svc-smsbk [694.201433ms] Apr 10 21:22:53.793: INFO: Created: latency-svc-lbms2 Apr 10 21:22:53.803: INFO: Got endpoints: latency-svc-lbms2 [752.469501ms] Apr 10 21:22:53.837: INFO: Created: latency-svc-nqzc2 Apr 10 21:22:53.873: INFO: Got endpoints: latency-svc-nqzc2 [753.435721ms] Apr 10 21:22:53.945: INFO: Created: latency-svc-l56hx Apr 10 21:22:53.950: INFO: Got endpoints: latency-svc-l56hx [800.393302ms] Apr 10 21:22:53.971: INFO: Created: 
latency-svc-hrwzr Apr 10 21:22:53.986: INFO: Got endpoints: latency-svc-hrwzr [796.01248ms] Apr 10 21:22:54.017: INFO: Created: latency-svc-kplnl Apr 10 21:22:54.034: INFO: Got endpoints: latency-svc-kplnl [802.335092ms] Apr 10 21:22:54.077: INFO: Created: latency-svc-whz2j Apr 10 21:22:54.094: INFO: Got endpoints: latency-svc-whz2j [802.009678ms] Apr 10 21:22:54.113: INFO: Created: latency-svc-scp97 Apr 10 21:22:54.124: INFO: Got endpoints: latency-svc-scp97 [768.311969ms] Apr 10 21:22:54.145: INFO: Created: latency-svc-xr6gr Apr 10 21:22:54.160: INFO: Got endpoints: latency-svc-xr6gr [772.128333ms] Apr 10 21:22:54.223: INFO: Created: latency-svc-ws48b Apr 10 21:22:54.241: INFO: Got endpoints: latency-svc-ws48b [788.049096ms] Apr 10 21:22:54.269: INFO: Created: latency-svc-xfqkj Apr 10 21:22:54.293: INFO: Got endpoints: latency-svc-xfqkj [783.146429ms] Apr 10 21:22:54.317: INFO: Created: latency-svc-d7gtd Apr 10 21:22:54.381: INFO: Got endpoints: latency-svc-d7gtd [829.523646ms] Apr 10 21:22:54.381: INFO: Created: latency-svc-vbgxh Apr 10 21:22:54.389: INFO: Got endpoints: latency-svc-vbgxh [800.285229ms] Apr 10 21:22:54.409: INFO: Created: latency-svc-dm7h9 Apr 10 21:22:54.432: INFO: Got endpoints: latency-svc-dm7h9 [785.765845ms] Apr 10 21:22:54.467: INFO: Created: latency-svc-k5bhn Apr 10 21:22:54.523: INFO: Got endpoints: latency-svc-k5bhn [838.794616ms] Apr 10 21:22:54.528: INFO: Created: latency-svc-qbmsh Apr 10 21:22:54.540: INFO: Got endpoints: latency-svc-qbmsh [825.751915ms] Apr 10 21:22:54.559: INFO: Created: latency-svc-hvm44 Apr 10 21:22:54.571: INFO: Got endpoints: latency-svc-hvm44 [767.843331ms] Apr 10 21:22:54.608: INFO: Created: latency-svc-jwpfr Apr 10 21:22:54.655: INFO: Got endpoints: latency-svc-jwpfr [781.853626ms] Apr 10 21:22:54.683: INFO: Created: latency-svc-7df5p Apr 10 21:22:54.725: INFO: Got endpoints: latency-svc-7df5p [775.428459ms] Apr 10 21:22:54.811: INFO: Created: latency-svc-r6pf5 Apr 10 21:22:54.829: INFO: Got endpoints: 
latency-svc-r6pf5 [843.857217ms] Apr 10 21:22:54.853: INFO: Created: latency-svc-bmhql Apr 10 21:22:54.865: INFO: Got endpoints: latency-svc-bmhql [831.174397ms] Apr 10 21:22:54.905: INFO: Created: latency-svc-xrw72 Apr 10 21:22:54.942: INFO: Got endpoints: latency-svc-xrw72 [847.972131ms] Apr 10 21:22:54.952: INFO: Created: latency-svc-2j8qt Apr 10 21:22:54.968: INFO: Got endpoints: latency-svc-2j8qt [843.98367ms] Apr 10 21:22:55.009: INFO: Created: latency-svc-pjvbp Apr 10 21:22:55.022: INFO: Got endpoints: latency-svc-pjvbp [861.55469ms] Apr 10 21:22:55.086: INFO: Created: latency-svc-qn8gf Apr 10 21:22:55.089: INFO: Got endpoints: latency-svc-qn8gf [848.064269ms] Apr 10 21:22:55.115: INFO: Created: latency-svc-cx7hs Apr 10 21:22:55.131: INFO: Got endpoints: latency-svc-cx7hs [837.968853ms] Apr 10 21:22:55.150: INFO: Created: latency-svc-wqcnj Apr 10 21:22:55.161: INFO: Got endpoints: latency-svc-wqcnj [780.444916ms] Apr 10 21:22:55.183: INFO: Created: latency-svc-c8rw7 Apr 10 21:22:55.254: INFO: Got endpoints: latency-svc-c8rw7 [864.838997ms] Apr 10 21:22:55.256: INFO: Created: latency-svc-j4kt4 Apr 10 21:22:55.316: INFO: Got endpoints: latency-svc-j4kt4 [883.959381ms] Apr 10 21:22:55.403: INFO: Created: latency-svc-wznc5 Apr 10 21:22:55.407: INFO: Got endpoints: latency-svc-wznc5 [884.362507ms] Apr 10 21:22:55.445: INFO: Created: latency-svc-qzvst Apr 10 21:22:55.460: INFO: Got endpoints: latency-svc-qzvst [919.50835ms] Apr 10 21:22:55.487: INFO: Created: latency-svc-6m6j4 Apr 10 21:22:55.502: INFO: Got endpoints: latency-svc-6m6j4 [930.748236ms] Apr 10 21:22:55.585: INFO: Created: latency-svc-9vsqp Apr 10 21:22:55.600: INFO: Got endpoints: latency-svc-9vsqp [944.78043ms] Apr 10 21:22:55.615: INFO: Created: latency-svc-88bp6 Apr 10 21:22:55.629: INFO: Got endpoints: latency-svc-88bp6 [903.384473ms] Apr 10 21:22:55.715: INFO: Created: latency-svc-qk8qs Apr 10 21:22:55.718: INFO: Got endpoints: latency-svc-qk8qs [888.340249ms] Apr 10 21:22:55.741: INFO: Created: 
latency-svc-xbzvp Apr 10 21:22:55.755: INFO: Got endpoints: latency-svc-xbzvp [889.534325ms] Apr 10 21:22:55.771: INFO: Created: latency-svc-65x5z Apr 10 21:22:55.851: INFO: Got endpoints: latency-svc-65x5z [908.864508ms] Apr 10 21:22:56.022: INFO: Created: latency-svc-frgjf Apr 10 21:22:56.043: INFO: Got endpoints: latency-svc-frgjf [1.074923208s] Apr 10 21:22:56.065: INFO: Created: latency-svc-sp7sq Apr 10 21:22:56.085: INFO: Got endpoints: latency-svc-sp7sq [1.063021648s] Apr 10 21:22:56.111: INFO: Created: latency-svc-56vc6 Apr 10 21:22:56.194: INFO: Got endpoints: latency-svc-56vc6 [1.104342205s] Apr 10 21:22:56.221: INFO: Created: latency-svc-4znxv Apr 10 21:22:56.235: INFO: Got endpoints: latency-svc-4znxv [1.104092472s] Apr 10 21:22:56.257: INFO: Created: latency-svc-fqckw Apr 10 21:22:56.265: INFO: Got endpoints: latency-svc-fqckw [1.104192847s] Apr 10 21:22:56.285: INFO: Created: latency-svc-t75nb Apr 10 21:22:56.328: INFO: Got endpoints: latency-svc-t75nb [1.073471195s] Apr 10 21:22:56.345: INFO: Created: latency-svc-l88dt Apr 10 21:22:56.356: INFO: Got endpoints: latency-svc-l88dt [1.04054842s] Apr 10 21:22:56.387: INFO: Created: latency-svc-zxznz Apr 10 21:22:56.399: INFO: Got endpoints: latency-svc-zxznz [991.340483ms] Apr 10 21:22:56.487: INFO: Created: latency-svc-tkhqr Apr 10 21:22:56.494: INFO: Got endpoints: latency-svc-tkhqr [1.033929696s] Apr 10 21:22:56.513: INFO: Created: latency-svc-69hqm Apr 10 21:22:56.526: INFO: Got endpoints: latency-svc-69hqm [1.023783517s] Apr 10 21:22:56.549: INFO: Created: latency-svc-b42q4 Apr 10 21:22:56.562: INFO: Got endpoints: latency-svc-b42q4 [962.140996ms] Apr 10 21:22:56.581: INFO: Created: latency-svc-hlxsp Apr 10 21:22:56.625: INFO: Got endpoints: latency-svc-hlxsp [995.832118ms] Apr 10 21:22:56.647: INFO: Created: latency-svc-nkrss Apr 10 21:22:56.665: INFO: Got endpoints: latency-svc-nkrss [947.21601ms] Apr 10 21:22:56.692: INFO: Created: latency-svc-bqnkx Apr 10 21:22:56.707: INFO: Got endpoints: 
latency-svc-bqnkx [951.727019ms] Apr 10 21:22:56.723: INFO: Created: latency-svc-kq8c5 Apr 10 21:22:56.756: INFO: Got endpoints: latency-svc-kq8c5 [905.613964ms] Apr 10 21:22:56.765: INFO: Created: latency-svc-g6g4k Apr 10 21:22:56.779: INFO: Got endpoints: latency-svc-g6g4k [736.345206ms] Apr 10 21:22:56.797: INFO: Created: latency-svc-5vzlc Apr 10 21:22:56.810: INFO: Got endpoints: latency-svc-5vzlc [724.584364ms] Apr 10 21:22:56.827: INFO: Created: latency-svc-tlv87 Apr 10 21:22:56.839: INFO: Got endpoints: latency-svc-tlv87 [645.782593ms] Apr 10 21:22:56.888: INFO: Created: latency-svc-dszjx Apr 10 21:22:56.891: INFO: Got endpoints: latency-svc-dszjx [655.473521ms] Apr 10 21:22:56.921: INFO: Created: latency-svc-6qksk Apr 10 21:22:56.950: INFO: Got endpoints: latency-svc-6qksk [684.690336ms] Apr 10 21:22:56.981: INFO: Created: latency-svc-rrxjl Apr 10 21:22:57.020: INFO: Got endpoints: latency-svc-rrxjl [692.32071ms] Apr 10 21:22:57.031: INFO: Created: latency-svc-q7c9v Apr 10 21:22:57.045: INFO: Got endpoints: latency-svc-q7c9v [688.935972ms] Apr 10 21:22:57.061: INFO: Created: latency-svc-n767f Apr 10 21:22:57.087: INFO: Got endpoints: latency-svc-n767f [688.426807ms] Apr 10 21:22:57.106: INFO: Created: latency-svc-64k4j Apr 10 21:22:57.117: INFO: Got endpoints: latency-svc-64k4j [623.669691ms] Apr 10 21:22:57.167: INFO: Created: latency-svc-x9vvv Apr 10 21:22:57.178: INFO: Got endpoints: latency-svc-x9vvv [652.225991ms] Apr 10 21:22:57.197: INFO: Created: latency-svc-kttjk Apr 10 21:22:57.208: INFO: Got endpoints: latency-svc-kttjk [646.113982ms] Apr 10 21:22:57.229: INFO: Created: latency-svc-8kmsg Apr 10 21:22:57.356: INFO: Got endpoints: latency-svc-8kmsg [731.209909ms] Apr 10 21:22:57.358: INFO: Created: latency-svc-pm4hz Apr 10 21:22:57.364: INFO: Got endpoints: latency-svc-pm4hz [698.928183ms] Apr 10 21:22:57.427: INFO: Created: latency-svc-fx9zg Apr 10 21:22:57.442: INFO: Got endpoints: latency-svc-fx9zg [735.696636ms] Apr 10 21:22:57.493: INFO: 
Created: latency-svc-5bgh7 Apr 10 21:22:57.495: INFO: Got endpoints: latency-svc-5bgh7 [738.686239ms] Apr 10 21:22:57.514: INFO: Created: latency-svc-xvztc Apr 10 21:22:57.527: INFO: Got endpoints: latency-svc-xvztc [747.619227ms] Apr 10 21:22:57.551: INFO: Created: latency-svc-glrgz Apr 10 21:22:57.563: INFO: Got endpoints: latency-svc-glrgz [753.576432ms] Apr 10 21:22:57.580: INFO: Created: latency-svc-9qwxj Apr 10 21:22:57.630: INFO: Got endpoints: latency-svc-9qwxj [790.833639ms] Apr 10 21:22:57.643: INFO: Created: latency-svc-hgbcm Apr 10 21:22:57.654: INFO: Got endpoints: latency-svc-hgbcm [763.078733ms] Apr 10 21:22:57.679: INFO: Created: latency-svc-8vwmk Apr 10 21:22:57.690: INFO: Got endpoints: latency-svc-8vwmk [739.980455ms] Apr 10 21:22:57.709: INFO: Created: latency-svc-vcbnq Apr 10 21:22:57.720: INFO: Got endpoints: latency-svc-vcbnq [700.211266ms] Apr 10 21:22:57.762: INFO: Created: latency-svc-9k287 Apr 10 21:22:57.791: INFO: Got endpoints: latency-svc-9k287 [745.319992ms] Apr 10 21:22:57.817: INFO: Created: latency-svc-wx6pv Apr 10 21:22:57.829: INFO: Got endpoints: latency-svc-wx6pv [742.391428ms] Apr 10 21:22:57.847: INFO: Created: latency-svc-sgzk5 Apr 10 21:22:57.860: INFO: Got endpoints: latency-svc-sgzk5 [742.359048ms] Apr 10 21:22:57.925: INFO: Created: latency-svc-8xzcd Apr 10 21:22:57.928: INFO: Got endpoints: latency-svc-8xzcd [750.473085ms] Apr 10 21:22:57.953: INFO: Created: latency-svc-2wq27 Apr 10 21:22:57.968: INFO: Got endpoints: latency-svc-2wq27 [760.438348ms] Apr 10 21:22:57.995: INFO: Created: latency-svc-9w6qg Apr 10 21:22:58.011: INFO: Got endpoints: latency-svc-9w6qg [654.598092ms] Apr 10 21:22:58.081: INFO: Created: latency-svc-5ddfp Apr 10 21:22:58.084: INFO: Got endpoints: latency-svc-5ddfp [719.801813ms] Apr 10 21:22:58.121: INFO: Created: latency-svc-6dh8k Apr 10 21:22:58.157: INFO: Got endpoints: latency-svc-6dh8k [715.102393ms] Apr 10 21:22:58.224: INFO: Created: latency-svc-5p7v6 Apr 10 21:22:58.226: INFO: Got 
endpoints: latency-svc-5p7v6 [731.058769ms] Apr 10 21:22:58.273: INFO: Created: latency-svc-bptqs Apr 10 21:22:58.281: INFO: Got endpoints: latency-svc-bptqs [754.214791ms] Apr 10 21:22:58.300: INFO: Created: latency-svc-hmqfn Apr 10 21:22:58.312: INFO: Got endpoints: latency-svc-hmqfn [748.055493ms] Apr 10 21:22:58.355: INFO: Created: latency-svc-lcxxg Apr 10 21:22:58.375: INFO: Created: latency-svc-vh646 Apr 10 21:22:58.375: INFO: Got endpoints: latency-svc-lcxxg [744.573925ms] Apr 10 21:22:58.390: INFO: Got endpoints: latency-svc-vh646 [736.436237ms] Apr 10 21:22:58.411: INFO: Created: latency-svc-kdbfk Apr 10 21:22:58.421: INFO: Got endpoints: latency-svc-kdbfk [730.77257ms] Apr 10 21:22:58.441: INFO: Created: latency-svc-57kxn Apr 10 21:22:58.505: INFO: Got endpoints: latency-svc-57kxn [784.408034ms] Apr 10 21:22:58.517: INFO: Created: latency-svc-79fzw Apr 10 21:22:58.529: INFO: Got endpoints: latency-svc-79fzw [738.379638ms] Apr 10 21:22:58.649: INFO: Created: latency-svc-xtfwv Apr 10 21:22:58.653: INFO: Got endpoints: latency-svc-xtfwv [823.367556ms] Apr 10 21:22:58.691: INFO: Created: latency-svc-w99dj Apr 10 21:22:58.703: INFO: Got endpoints: latency-svc-w99dj [843.582886ms] Apr 10 21:22:58.727: INFO: Created: latency-svc-9vj7c Apr 10 21:22:58.740: INFO: Got endpoints: latency-svc-9vj7c [811.260771ms] Apr 10 21:22:58.787: INFO: Created: latency-svc-rwvr7 Apr 10 21:22:58.789: INFO: Got endpoints: latency-svc-rwvr7 [820.887684ms] Apr 10 21:22:58.819: INFO: Created: latency-svc-8csdg Apr 10 21:22:58.831: INFO: Got endpoints: latency-svc-8csdg [819.905575ms] Apr 10 21:22:58.855: INFO: Created: latency-svc-vgzn6 Apr 10 21:22:58.877: INFO: Got endpoints: latency-svc-vgzn6 [792.964226ms] Apr 10 21:22:58.936: INFO: Created: latency-svc-vxts7 Apr 10 21:22:58.939: INFO: Got endpoints: latency-svc-vxts7 [781.776943ms] Apr 10 21:22:58.963: INFO: Created: latency-svc-m57tg Apr 10 21:22:58.975: INFO: Got endpoints: latency-svc-m57tg [749.162748ms] Apr 10 21:22:58.993: 
INFO: Created: latency-svc-tsgkw Apr 10 21:22:59.006: INFO: Got endpoints: latency-svc-tsgkw [724.43736ms] Apr 10 21:22:59.023: INFO: Created: latency-svc-9ncs2 Apr 10 21:22:59.074: INFO: Got endpoints: latency-svc-9ncs2 [762.216295ms] Apr 10 21:22:59.077: INFO: Created: latency-svc-sh6nt Apr 10 21:22:59.090: INFO: Got endpoints: latency-svc-sh6nt [715.105994ms] Apr 10 21:22:59.118: INFO: Created: latency-svc-sqjb2 Apr 10 21:22:59.133: INFO: Got endpoints: latency-svc-sqjb2 [742.932242ms] Apr 10 21:22:59.153: INFO: Created: latency-svc-vdnt5 Apr 10 21:22:59.169: INFO: Got endpoints: latency-svc-vdnt5 [748.206162ms] Apr 10 21:22:59.224: INFO: Created: latency-svc-gb5zg Apr 10 21:22:59.230: INFO: Got endpoints: latency-svc-gb5zg [725.120339ms] Apr 10 21:22:59.280: INFO: Created: latency-svc-gxrgq Apr 10 21:22:59.296: INFO: Got endpoints: latency-svc-gxrgq [766.728797ms] Apr 10 21:22:59.321: INFO: Created: latency-svc-mjspw Apr 10 21:22:59.367: INFO: Got endpoints: latency-svc-mjspw [713.758744ms] Apr 10 21:22:59.368: INFO: Created: latency-svc-6pht6 Apr 10 21:22:59.407: INFO: Got endpoints: latency-svc-6pht6 [703.118196ms] Apr 10 21:22:59.443: INFO: Created: latency-svc-dxmct Apr 10 21:22:59.452: INFO: Got endpoints: latency-svc-dxmct [712.231426ms] Apr 10 21:22:59.505: INFO: Created: latency-svc-krs7w Apr 10 21:22:59.509: INFO: Got endpoints: latency-svc-krs7w [720.071893ms] Apr 10 21:22:59.531: INFO: Created: latency-svc-rqjt4 Apr 10 21:22:59.543: INFO: Got endpoints: latency-svc-rqjt4 [712.600081ms] Apr 10 21:22:59.560: INFO: Created: latency-svc-kwkqk Apr 10 21:22:59.574: INFO: Got endpoints: latency-svc-kwkqk [696.507525ms] Apr 10 21:22:59.593: INFO: Created: latency-svc-l5scw Apr 10 21:22:59.604: INFO: Got endpoints: latency-svc-l5scw [664.334435ms] Apr 10 21:22:59.649: INFO: Created: latency-svc-t5k9x Apr 10 21:22:59.652: INFO: Got endpoints: latency-svc-t5k9x [676.074961ms] Apr 10 21:22:59.700: INFO: Created: latency-svc-2gnbc Apr 10 21:22:59.730: INFO: Got 
endpoints: latency-svc-2gnbc [724.64859ms] Apr 10 21:22:59.747: INFO: Created: latency-svc-cs9ps Apr 10 21:22:59.792: INFO: Got endpoints: latency-svc-cs9ps [718.397063ms] Apr 10 21:22:59.802: INFO: Created: latency-svc-xfw2r Apr 10 21:22:59.815: INFO: Got endpoints: latency-svc-xfw2r [724.436684ms] Apr 10 21:22:59.834: INFO: Created: latency-svc-hj46m Apr 10 21:22:59.845: INFO: Got endpoints: latency-svc-hj46m [711.657258ms] Apr 10 21:22:59.863: INFO: Created: latency-svc-2m6t2 Apr 10 21:22:59.875: INFO: Got endpoints: latency-svc-2m6t2 [705.854407ms] Apr 10 21:22:59.936: INFO: Created: latency-svc-4jq7b Apr 10 21:22:59.963: INFO: Created: latency-svc-ht6xp Apr 10 21:22:59.963: INFO: Got endpoints: latency-svc-4jq7b [732.894618ms] Apr 10 21:22:59.989: INFO: Got endpoints: latency-svc-ht6xp [693.561624ms] Apr 10 21:23:00.015: INFO: Created: latency-svc-hmr85 Apr 10 21:23:00.026: INFO: Got endpoints: latency-svc-hmr85 [659.60188ms] Apr 10 21:23:00.068: INFO: Created: latency-svc-h68db Apr 10 21:23:00.112: INFO: Got endpoints: latency-svc-h68db [705.795747ms] Apr 10 21:23:00.113: INFO: Created: latency-svc-m88v7 Apr 10 21:23:00.129: INFO: Got endpoints: latency-svc-m88v7 [677.06072ms] Apr 10 21:23:00.205: INFO: Created: latency-svc-rvlmp Apr 10 21:23:00.229: INFO: Got endpoints: latency-svc-rvlmp [719.607933ms] Apr 10 21:23:00.229: INFO: Created: latency-svc-scx77 Apr 10 21:23:00.241: INFO: Got endpoints: latency-svc-scx77 [697.264824ms] Apr 10 21:23:00.260: INFO: Created: latency-svc-r6mwr Apr 10 21:23:00.271: INFO: Got endpoints: latency-svc-r6mwr [697.514498ms] Apr 10 21:23:00.305: INFO: Created: latency-svc-qn6rl Apr 10 21:23:00.337: INFO: Got endpoints: latency-svc-qn6rl [733.676904ms] Apr 10 21:23:00.359: INFO: Created: latency-svc-595ns Apr 10 21:23:00.368: INFO: Got endpoints: latency-svc-595ns [716.101367ms] Apr 10 21:23:00.391: INFO: Created: latency-svc-jh6s5 Apr 10 21:23:00.405: INFO: Got endpoints: latency-svc-jh6s5 [674.498762ms] Apr 10 21:23:00.427: 
INFO: Created: latency-svc-dcqsj Apr 10 21:23:00.475: INFO: Got endpoints: latency-svc-dcqsj [682.450041ms] Apr 10 21:23:00.500: INFO: Created: latency-svc-6ffx7 Apr 10 21:23:00.539: INFO: Got endpoints: latency-svc-6ffx7 [724.282169ms] Apr 10 21:23:00.631: INFO: Created: latency-svc-b286n Apr 10 21:23:00.655: INFO: Got endpoints: latency-svc-b286n [809.521756ms] Apr 10 21:23:00.657: INFO: Created: latency-svc-sqz94 Apr 10 21:23:00.669: INFO: Got endpoints: latency-svc-sqz94 [794.08161ms] Apr 10 21:23:00.691: INFO: Created: latency-svc-ttct4 Apr 10 21:23:00.706: INFO: Got endpoints: latency-svc-ttct4 [743.145411ms] Apr 10 21:23:00.774: INFO: Created: latency-svc-qvph7 Apr 10 21:23:00.785: INFO: Got endpoints: latency-svc-qvph7 [795.193819ms] Apr 10 21:23:00.803: INFO: Created: latency-svc-b8drq Apr 10 21:23:00.814: INFO: Got endpoints: latency-svc-b8drq [787.724295ms] Apr 10 21:23:00.835: INFO: Created: latency-svc-p8qrq Apr 10 21:23:00.865: INFO: Got endpoints: latency-svc-p8qrq [752.749915ms] Apr 10 21:23:00.918: INFO: Created: latency-svc-xcth7 Apr 10 21:23:00.923: INFO: Got endpoints: latency-svc-xcth7 [793.642285ms] Apr 10 21:23:00.964: INFO: Created: latency-svc-bqkph Apr 10 21:23:00.971: INFO: Got endpoints: latency-svc-bqkph [741.636404ms] Apr 10 21:23:00.995: INFO: Created: latency-svc-5677q Apr 10 21:23:01.007: INFO: Got endpoints: latency-svc-5677q [766.36767ms] Apr 10 21:23:01.056: INFO: Created: latency-svc-hndcw Apr 10 21:23:01.075: INFO: Got endpoints: latency-svc-hndcw [804.176073ms] Apr 10 21:23:01.105: INFO: Created: latency-svc-5fhf6 Apr 10 21:23:01.122: INFO: Got endpoints: latency-svc-5fhf6 [784.386719ms] Apr 10 21:23:01.138: INFO: Created: latency-svc-f4g5c Apr 10 21:23:01.152: INFO: Got endpoints: latency-svc-f4g5c [784.070027ms] Apr 10 21:23:01.182: INFO: Created: latency-svc-mwrrl Apr 10 21:23:01.194: INFO: Got endpoints: latency-svc-mwrrl [789.51402ms] Apr 10 21:23:01.212: INFO: Created: latency-svc-wdmt8 Apr 10 21:23:01.225: INFO: Got 
endpoints: latency-svc-wdmt8 [749.710425ms] Apr 10 21:23:01.244: INFO: Created: latency-svc-pjbm9 Apr 10 21:23:01.261: INFO: Got endpoints: latency-svc-pjbm9 [722.271148ms] Apr 10 21:23:01.325: INFO: Created: latency-svc-9mhl5 Apr 10 21:23:01.329: INFO: Got endpoints: latency-svc-9mhl5 [673.828131ms] Apr 10 21:23:01.355: INFO: Created: latency-svc-8f69s Apr 10 21:23:01.385: INFO: Got endpoints: latency-svc-8f69s [715.710404ms] Apr 10 21:23:01.422: INFO: Created: latency-svc-bhr5s Apr 10 21:23:01.475: INFO: Got endpoints: latency-svc-bhr5s [768.656176ms] Apr 10 21:23:01.489: INFO: Created: latency-svc-lwjxz Apr 10 21:23:01.502: INFO: Got endpoints: latency-svc-lwjxz [717.56756ms] Apr 10 21:23:01.525: INFO: Created: latency-svc-rpx6b Apr 10 21:23:01.538: INFO: Got endpoints: latency-svc-rpx6b [724.168543ms] Apr 10 21:23:01.553: INFO: Created: latency-svc-rqtbb Apr 10 21:23:01.569: INFO: Got endpoints: latency-svc-rqtbb [703.860972ms] Apr 10 21:23:01.619: INFO: Created: latency-svc-hmxpw Apr 10 21:23:01.643: INFO: Got endpoints: latency-svc-hmxpw [720.003536ms] Apr 10 21:23:01.643: INFO: Created: latency-svc-rv2h7 Apr 10 21:23:01.653: INFO: Got endpoints: latency-svc-rv2h7 [682.326019ms] Apr 10 21:23:01.681: INFO: Created: latency-svc-pqvgt Apr 10 21:23:01.711: INFO: Got endpoints: latency-svc-pqvgt [703.606564ms] Apr 10 21:23:01.768: INFO: Created: latency-svc-8wjtb Apr 10 21:23:01.774: INFO: Got endpoints: latency-svc-8wjtb [698.710565ms] Apr 10 21:23:01.805: INFO: Created: latency-svc-2qtt6 Apr 10 21:23:01.819: INFO: Got endpoints: latency-svc-2qtt6 [697.219303ms] Apr 10 21:23:01.835: INFO: Created: latency-svc-b5sfr Apr 10 21:23:01.846: INFO: Got endpoints: latency-svc-b5sfr [694.61945ms] Apr 10 21:23:01.867: INFO: Created: latency-svc-zz79g Apr 10 21:23:01.906: INFO: Got endpoints: latency-svc-zz79g [711.538621ms] Apr 10 21:23:01.923: INFO: Created: latency-svc-8m7cp Apr 10 21:23:01.939: INFO: Got endpoints: latency-svc-8m7cp [714.152536ms] Apr 10 21:23:01.985: 
INFO: Created: latency-svc-cfwtk Apr 10 21:23:01.997: INFO: Got endpoints: latency-svc-cfwtk [736.084155ms] Apr 10 21:23:02.038: INFO: Created: latency-svc-qmt2v Apr 10 21:23:02.041: INFO: Got endpoints: latency-svc-qmt2v [712.819823ms] Apr 10 21:23:02.041: INFO: Latencies: [64.275753ms 89.294944ms 159.406805ms 240.741096ms 252.914038ms 288.646666ms 371.756121ms 457.084544ms 522.732824ms 583.37946ms 623.669691ms 634.012307ms 645.782593ms 646.113982ms 652.225991ms 654.598092ms 655.473521ms 659.60188ms 659.841821ms 664.299062ms 664.334435ms 665.40362ms 668.064755ms 673.828131ms 674.41635ms 674.498762ms 676.074961ms 677.06072ms 681.854346ms 682.326019ms 682.450041ms 684.690336ms 685.726571ms 688.426807ms 688.935972ms 689.233301ms 691.517118ms 692.32071ms 693.561624ms 693.787203ms 694.13446ms 694.201433ms 694.61945ms 696.507525ms 697.219303ms 697.264824ms 697.514498ms 698.710565ms 698.928183ms 700.211266ms 703.118196ms 703.132352ms 703.606564ms 703.860972ms 705.795747ms 705.854407ms 711.538621ms 711.640709ms 711.657258ms 712.231426ms 712.600081ms 712.819823ms 713.758744ms 714.152536ms 715.102393ms 715.105994ms 715.710404ms 716.101367ms 717.56756ms 718.397063ms 719.516048ms 719.607933ms 719.801813ms 720.003536ms 720.071893ms 722.271148ms 724.168543ms 724.282169ms 724.436684ms 724.43736ms 724.584364ms 724.64859ms 725.120339ms 728.127138ms 730.77257ms 731.058769ms 731.209909ms 732.894618ms 733.676904ms 735.696636ms 736.084155ms 736.345206ms 736.436237ms 738.379638ms 738.686239ms 739.980455ms 741.636404ms 742.005281ms 742.359048ms 742.391428ms 742.932242ms 743.145411ms 744.573925ms 745.319992ms 747.619227ms 748.055493ms 748.206162ms 749.162748ms 749.710425ms 750.473085ms 752.469501ms 752.749915ms 753.435721ms 753.576432ms 754.214791ms 755.00529ms 756.026772ms 759.008985ms 760.438348ms 762.216295ms 763.078733ms 766.36767ms 766.728797ms 767.843331ms 768.311969ms 768.656176ms 772.128333ms 775.428459ms 776.763621ms 777.615619ms 780.444916ms 781.776943ms 781.853626ms 
783.146429ms 784.070027ms 784.386719ms 784.408034ms 785.765845ms 787.724295ms 788.049096ms 789.51402ms 790.833639ms 792.964226ms 793.642285ms 794.08161ms 795.193819ms 796.01248ms 800.285229ms 800.393302ms 802.009678ms 802.335092ms 802.561927ms 804.176073ms 809.521756ms 810.095825ms 811.260771ms 818.050666ms 819.905575ms 820.631605ms 820.887684ms 823.367556ms 825.751915ms 829.523646ms 830.301069ms 831.174397ms 837.968853ms 838.794616ms 841.949998ms 843.582886ms 843.857217ms 843.98367ms 847.972131ms 848.064269ms 861.55469ms 864.838997ms 867.130853ms 883.959381ms 884.362507ms 888.340249ms 889.534325ms 903.384473ms 905.613964ms 908.864508ms 919.50835ms 930.748236ms 944.78043ms 947.21601ms 951.727019ms 962.140996ms 991.340483ms 995.832118ms 1.023783517s 1.033929696s 1.04054842s 1.063021648s 1.073471195s 1.074923208s 1.104092472s 1.104192847s 1.104342205s] Apr 10 21:23:02.042: INFO: 50 %ile: 742.932242ms Apr 10 21:23:02.042: INFO: 90 %ile: 903.384473ms Apr 10 21:23:02.042: INFO: 99 %ile: 1.104192847s Apr 10 21:23:02.042: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:23:02.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9221" for this suite. 
• [SLOW TEST:13.594 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":43,"skipped":721,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:23:02.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:23:02.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-326b9801-8e3a-49e5-93bd-6077fce7e4be" in namespace "projected-8574" to be "success or failure" Apr 10 21:23:02.183: INFO: Pod "downwardapi-volume-326b9801-8e3a-49e5-93bd-6077fce7e4be": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024733ms Apr 10 21:23:04.187: INFO: Pod "downwardapi-volume-326b9801-8e3a-49e5-93bd-6077fce7e4be": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015581792s Apr 10 21:23:06.190: INFO: Pod "downwardapi-volume-326b9801-8e3a-49e5-93bd-6077fce7e4be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019331306s STEP: Saw pod success Apr 10 21:23:06.190: INFO: Pod "downwardapi-volume-326b9801-8e3a-49e5-93bd-6077fce7e4be" satisfied condition "success or failure" Apr 10 21:23:06.193: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-326b9801-8e3a-49e5-93bd-6077fce7e4be container client-container: STEP: delete the pod Apr 10 21:23:06.224: INFO: Waiting for pod downwardapi-volume-326b9801-8e3a-49e5-93bd-6077fce7e4be to disappear Apr 10 21:23:06.228: INFO: Pod downwardapi-volume-326b9801-8e3a-49e5-93bd-6077fce7e4be no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:23:06.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8574" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:23:06.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:23:06.725: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:23:08.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150586, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150586, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150586, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150586, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:23:12.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:13.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:14.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:15.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:16.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:17.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:18.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:19.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:20.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Apr 10 21:23:21.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the 
/apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:23:21.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6382" for this suite. STEP: Destroying namespace "webhook-6382-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.099 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":45,"skipped":777,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:23:21.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 10 21:23:21.539: INFO: Waiting up to 5m0s for pod "pod-35a40a26-957a-40fc-97ed-a878cce7b994" in namespace "emptydir-2186" to be "success or failure" Apr 10 21:23:21.559: INFO: Pod "pod-35a40a26-957a-40fc-97ed-a878cce7b994": Phase="Pending", Reason="", readiness=false. Elapsed: 19.983128ms Apr 10 21:23:23.602: INFO: Pod "pod-35a40a26-957a-40fc-97ed-a878cce7b994": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062566924s Apr 10 21:23:25.606: INFO: Pod "pod-35a40a26-957a-40fc-97ed-a878cce7b994": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066651294s STEP: Saw pod success Apr 10 21:23:25.606: INFO: Pod "pod-35a40a26-957a-40fc-97ed-a878cce7b994" satisfied condition "success or failure" Apr 10 21:23:25.609: INFO: Trying to get logs from node jerma-worker pod pod-35a40a26-957a-40fc-97ed-a878cce7b994 container test-container: STEP: delete the pod Apr 10 21:23:25.649: INFO: Waiting for pod pod-35a40a26-957a-40fc-97ed-a878cce7b994 to disappear Apr 10 21:23:25.653: INFO: Pod pod-35a40a26-957a-40fc-97ed-a878cce7b994 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:23:25.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2186" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":782,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:23:25.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-91084635-6b9f-4f5f-9a01-e799f50b0672 STEP: Creating a pod to test consume secrets Apr 10 21:23:25.721: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9fe17327-65b9-4142-b163-b02967c330f1" in namespace "projected-1211" to be "success or failure" Apr 10 21:23:25.768: INFO: Pod "pod-projected-secrets-9fe17327-65b9-4142-b163-b02967c330f1": Phase="Pending", Reason="", readiness=false. Elapsed: 47.329404ms Apr 10 21:23:27.772: INFO: Pod "pod-projected-secrets-9fe17327-65b9-4142-b163-b02967c330f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050894399s Apr 10 21:23:29.776: INFO: Pod "pod-projected-secrets-9fe17327-65b9-4142-b163-b02967c330f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054792378s STEP: Saw pod success Apr 10 21:23:29.776: INFO: Pod "pod-projected-secrets-9fe17327-65b9-4142-b163-b02967c330f1" satisfied condition "success or failure" Apr 10 21:23:29.778: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-9fe17327-65b9-4142-b163-b02967c330f1 container secret-volume-test: STEP: delete the pod Apr 10 21:23:29.799: INFO: Waiting for pod pod-projected-secrets-9fe17327-65b9-4142-b163-b02967c330f1 to disappear Apr 10 21:23:29.803: INFO: Pod pod-projected-secrets-9fe17327-65b9-4142-b163-b02967c330f1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:23:29.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1211" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":797,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:23:29.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer 
+search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-109 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-109;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-109 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-109;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-109.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-109.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-109.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-109.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-109.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-109.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-109.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-109.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-109.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-109.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-109.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 221.142.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.142.221_udp@PTR;check="$$(dig +tcp +noall +answer +search 221.142.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.142.221_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-109 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-109;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-109 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-109;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-109.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-109.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-109.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-109.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-109.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-109.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-109.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-109.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-109.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-109.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-109.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-109.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 221.142.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.142.221_udp@PTR;check="$$(dig +tcp +noall +answer +search 221.142.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.142.221_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 21:23:36.081: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.085: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.088: INFO: Unable to read wheezy_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.090: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.093: INFO: Unable to read wheezy_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods 
dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.095: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.099: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.102: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.123: INFO: Unable to read jessie_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.126: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.129: INFO: Unable to read jessie_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.131: INFO: Unable to read jessie_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.135: INFO: Unable to read jessie_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource 
(get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.138: INFO: Unable to read jessie_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.141: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.144: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:36.164: INFO: Lookups using dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-109 wheezy_tcp@dns-test-service.dns-109 wheezy_udp@dns-test-service.dns-109.svc wheezy_tcp@dns-test-service.dns-109.svc wheezy_udp@_http._tcp.dns-test-service.dns-109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-109 jessie_tcp@dns-test-service.dns-109 jessie_udp@dns-test-service.dns-109.svc jessie_tcp@dns-test-service.dns-109.svc jessie_udp@_http._tcp.dns-test-service.dns-109.svc jessie_tcp@_http._tcp.dns-test-service.dns-109.svc] Apr 10 21:23:41.169: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.172: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods 
dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.183: INFO: Unable to read wheezy_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.186: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.190: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.193: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.210: INFO: Unable to read jessie_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.212: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource 
(get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.214: INFO: Unable to read jessie_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.216: INFO: Unable to read jessie_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.218: INFO: Unable to read jessie_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.223: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.225: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:41.241: INFO: Lookups using dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-109 wheezy_tcp@dns-test-service.dns-109 wheezy_udp@dns-test-service.dns-109.svc wheezy_tcp@dns-test-service.dns-109.svc wheezy_udp@_http._tcp.dns-test-service.dns-109.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-109 jessie_tcp@dns-test-service.dns-109 jessie_udp@dns-test-service.dns-109.svc jessie_tcp@dns-test-service.dns-109.svc jessie_udp@_http._tcp.dns-test-service.dns-109.svc jessie_tcp@_http._tcp.dns-test-service.dns-109.svc] Apr 10 21:23:46.169: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.172: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.178: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.181: INFO: Unable to read wheezy_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.184: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.187: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: 
the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.190: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.213: INFO: Unable to read jessie_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.216: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.219: INFO: Unable to read jessie_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.222: INFO: Unable to read jessie_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.225: INFO: Unable to read jessie_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.227: INFO: Unable to read jessie_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.230: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-109.svc from pod 
dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.233: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:46.249: INFO: Lookups using dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-109 wheezy_tcp@dns-test-service.dns-109 wheezy_udp@dns-test-service.dns-109.svc wheezy_tcp@dns-test-service.dns-109.svc wheezy_udp@_http._tcp.dns-test-service.dns-109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-109 jessie_tcp@dns-test-service.dns-109 jessie_udp@dns-test-service.dns-109.svc jessie_tcp@dns-test-service.dns-109.svc jessie_udp@_http._tcp.dns-test-service.dns-109.svc jessie_tcp@_http._tcp.dns-test-service.dns-109.svc] Apr 10 21:23:51.169: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.173: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109 from pod 
dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.184: INFO: Unable to read wheezy_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.187: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.191: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.194: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.221: INFO: Unable to read jessie_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.223: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.226: INFO: Unable to read jessie_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-109 
from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.230: INFO: Unable to read jessie_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.233: INFO: Unable to read jessie_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.236: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.239: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:51.257: INFO: Lookups using dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-109 wheezy_tcp@dns-test-service.dns-109 wheezy_udp@dns-test-service.dns-109.svc wheezy_tcp@dns-test-service.dns-109.svc wheezy_udp@_http._tcp.dns-test-service.dns-109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-109 jessie_tcp@dns-test-service.dns-109 jessie_udp@dns-test-service.dns-109.svc jessie_tcp@dns-test-service.dns-109.svc jessie_udp@_http._tcp.dns-test-service.dns-109.svc jessie_tcp@_http._tcp.dns-test-service.dns-109.svc] Apr 10 21:23:56.169: INFO: Unable to read wheezy_udp@dns-test-service 
from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.172: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.179: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.183: INFO: Unable to read wheezy_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.186: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.190: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.193: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.216: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.219: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.222: INFO: Unable to read jessie_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.225: INFO: Unable to read jessie_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.228: INFO: Unable to read jessie_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.231: INFO: Unable to read jessie_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.234: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.237: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:23:56.257: INFO: Lookups 
using dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-109 wheezy_tcp@dns-test-service.dns-109 wheezy_udp@dns-test-service.dns-109.svc wheezy_tcp@dns-test-service.dns-109.svc wheezy_udp@_http._tcp.dns-test-service.dns-109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-109 jessie_tcp@dns-test-service.dns-109 jessie_udp@dns-test-service.dns-109.svc jessie_tcp@dns-test-service.dns-109.svc jessie_udp@_http._tcp.dns-test-service.dns-109.svc jessie_tcp@_http._tcp.dns-test-service.dns-109.svc] Apr 10 21:24:01.169: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.172: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.175: INFO: Unable to read wheezy_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.178: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.181: INFO: Unable to read wheezy_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.184: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.187: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.190: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.210: INFO: Unable to read jessie_udp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.213: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.216: INFO: Unable to read jessie_udp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.218: INFO: Unable to read jessie_tcp@dns-test-service.dns-109 from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.221: INFO: Unable to read jessie_udp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.224: INFO: Unable 
to read jessie_tcp@dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.226: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.228: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-109.svc from pod dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50: the server could not find the requested resource (get pods dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50) Apr 10 21:24:01.245: INFO: Lookups using dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-109 wheezy_tcp@dns-test-service.dns-109 wheezy_udp@dns-test-service.dns-109.svc wheezy_tcp@dns-test-service.dns-109.svc wheezy_udp@_http._tcp.dns-test-service.dns-109.svc wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-109 jessie_tcp@dns-test-service.dns-109 jessie_udp@dns-test-service.dns-109.svc jessie_tcp@dns-test-service.dns-109.svc jessie_udp@_http._tcp.dns-test-service.dns-109.svc jessie_tcp@_http._tcp.dns-test-service.dns-109.svc] Apr 10 21:24:06.272: INFO: DNS probes using dns-109/dns-test-8716c25f-cc4f-4d76-8aaf-301d1bc21a50 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:24:06.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-109" for this suite. 
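The wheezy/jessie probe commands in the test above all follow one pattern: run `dig` with `+search` against a partial name, and write an `OK` marker file only when the answer section is non-empty; the harness then reads those marker files back to decide success. A minimal sketch of that pattern is below. Note that `dig` is stubbed out here with a shell function returning a canned A record, since the real test executes inside a probe pod against the cluster DNS service; the record contents and result paths are illustrative only.

```shell
#!/bin/sh
# Stub standing in for `dig` so the probe pattern can run without a live
# cluster DNS service. The real e2e probe queries kube-dns/CoreDNS.
dig() {
  echo "dns-test-service.dns-109.svc.cluster.local. 30 IN A 10.105.142.221"
}

RESULTS_DIR=$(mktemp -d)

# Same shape as the e2e probe: perform the lookup, and only write an OK
# marker file when the answer is non-empty (i.e. the name resolved).
probe() {
  name=$1; proto=$2
  check="$(dig +noall +answer +search "$name" A)" \
    && test -n "$check" \
    && echo OK > "$RESULTS_DIR/${proto}@${name}"
}

probe dns-test-service udp
probe dns-test-service.dns-109 tcp

cat "$RESULTS_DIR/udp@dns-test-service"
```

The early "could not find the requested resource" messages are expected: the prober keeps retrying each name once per second (the `for i in \`seq 1 600\`` loop) until every marker file exists, which is why the lookups fail at 21:23:36 but the run still reports "DNS probes ... succeeded" at 21:24:06.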
• [SLOW TEST:37.075 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":48,"skipped":830,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:24:06.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-1bde9500-6c73-49a8-9bb0-ccf80a558783 STEP: Creating a pod to test consume secrets Apr 10 21:24:07.005: INFO: Waiting up to 5m0s for pod "pod-secrets-ecb80f12-2a6a-4efd-8ed7-f7087ce50f1c" in namespace "secrets-9724" to be "success or failure" Apr 10 21:24:07.022: INFO: Pod "pod-secrets-ecb80f12-2a6a-4efd-8ed7-f7087ce50f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.525315ms Apr 10 21:24:09.069: INFO: Pod "pod-secrets-ecb80f12-2a6a-4efd-8ed7-f7087ce50f1c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.064137299s Apr 10 21:24:11.073: INFO: Pod "pod-secrets-ecb80f12-2a6a-4efd-8ed7-f7087ce50f1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068073453s STEP: Saw pod success Apr 10 21:24:11.073: INFO: Pod "pod-secrets-ecb80f12-2a6a-4efd-8ed7-f7087ce50f1c" satisfied condition "success or failure" Apr 10 21:24:11.076: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ecb80f12-2a6a-4efd-8ed7-f7087ce50f1c container secret-volume-test: STEP: delete the pod Apr 10 21:24:11.094: INFO: Waiting for pod pod-secrets-ecb80f12-2a6a-4efd-8ed7-f7087ce50f1c to disappear Apr 10 21:24:11.098: INFO: Pod pod-secrets-ecb80f12-2a6a-4efd-8ed7-f7087ce50f1c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:24:11.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9724" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":835,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:24:11.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Creating a pod to test override all Apr 10 21:24:11.188: INFO: Waiting up to 5m0s for pod "client-containers-db84b6a7-1d40-40d8-8a46-bf4802724a9e" in namespace "containers-1057" to be "success or failure" Apr 10 21:24:11.192: INFO: Pod "client-containers-db84b6a7-1d40-40d8-8a46-bf4802724a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.266039ms Apr 10 21:24:13.195: INFO: Pod "client-containers-db84b6a7-1d40-40d8-8a46-bf4802724a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006634071s Apr 10 21:24:15.199: INFO: Pod "client-containers-db84b6a7-1d40-40d8-8a46-bf4802724a9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01076373s STEP: Saw pod success Apr 10 21:24:15.199: INFO: Pod "client-containers-db84b6a7-1d40-40d8-8a46-bf4802724a9e" satisfied condition "success or failure" Apr 10 21:24:15.202: INFO: Trying to get logs from node jerma-worker pod client-containers-db84b6a7-1d40-40d8-8a46-bf4802724a9e container test-container: STEP: delete the pod Apr 10 21:24:15.244: INFO: Waiting for pod client-containers-db84b6a7-1d40-40d8-8a46-bf4802724a9e to disappear Apr 10 21:24:15.254: INFO: Pod client-containers-db84b6a7-1d40-40d8-8a46-bf4802724a9e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:24:15.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1057" for this suite. 
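The "override all" pod above exercises the standard `command`/`args` fields, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch of such a spec (names and image are illustrative, not taken from this run):

```yaml
# Hypothetical pod overriding both the image's default command and arguments.
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28             # assumed image
    command: ["/bin/echo"]          # replaces the image ENTRYPOINT
    args: ["override", "arguments"] # replaces the image CMD
```

The test then reads the container log (as in the "Trying to get logs" step above) to confirm the overridden command actually ran.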
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":854,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:24:15.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 10 21:24:19.344: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 10 21:24:34.444: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:24:34.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7758" for this suite. 
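The Delete Grace Period test above relies on the pod's termination grace period. A minimal sketch of the relevant spec field, with a hypothetical pod name and an assumed image:

```yaml
# Hypothetical pod illustrating the grace period used during graceful deletion.
apiVersion: v1
kind: Pod
metadata:
  name: grace-demo                    # hypothetical name
spec:
  terminationGracePeriodSeconds: 30   # grace applied if none is given at delete time
  containers:
  - name: app
    image: busybox:1.28               # assumed image
    command: ["sh", "-c", "sleep 3600"]
```

Deleting such a pod (e.g. `kubectl delete pod grace-demo --grace-period=30`) sends SIGTERM, waits out the grace period, then SIGKILLs; the test verifies the kubelet observed the termination notice before the pod object disappeared.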
• [SLOW TEST:19.192 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":51,"skipped":859,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:24:34.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:24:45.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1746" for this suite. • [SLOW TEST:11.150 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":52,"skipped":866,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:24:45.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-5803fdf6-0af1-4c61-abb5-915990a09354 STEP: Creating a pod to test consume configMaps Apr 10 21:24:45.676: INFO: Waiting up to 5m0s for pod "pod-configmaps-08a7a3bc-0e5d-4239-b83d-850f6ec34a9d" in namespace "configmap-9138" to be "success or failure" Apr 10 21:24:45.680: INFO: Pod "pod-configmaps-08a7a3bc-0e5d-4239-b83d-850f6ec34a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.659879ms Apr 10 21:24:47.684: INFO: Pod "pod-configmaps-08a7a3bc-0e5d-4239-b83d-850f6ec34a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007665618s Apr 10 21:24:49.688: INFO: Pod "pod-configmaps-08a7a3bc-0e5d-4239-b83d-850f6ec34a9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011849381s STEP: Saw pod success Apr 10 21:24:49.688: INFO: Pod "pod-configmaps-08a7a3bc-0e5d-4239-b83d-850f6ec34a9d" satisfied condition "success or failure" Apr 10 21:24:49.691: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-08a7a3bc-0e5d-4239-b83d-850f6ec34a9d container configmap-volume-test: STEP: delete the pod Apr 10 21:24:49.723: INFO: Waiting for pod pod-configmaps-08a7a3bc-0e5d-4239-b83d-850f6ec34a9d to disappear Apr 10 21:24:49.727: INFO: Pod pod-configmaps-08a7a3bc-0e5d-4239-b83d-850f6ec34a9d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:24:49.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9138" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":878,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:24:49.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:24:53.915: INFO: Waiting up to 5m0s for pod 
"client-envvars-bbaec176-7d43-41b6-b7c5-f606a756cacb" in namespace "pods-7907" to be "success or failure" Apr 10 21:24:53.950: INFO: Pod "client-envvars-bbaec176-7d43-41b6-b7c5-f606a756cacb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.436014ms Apr 10 21:24:55.979: INFO: Pod "client-envvars-bbaec176-7d43-41b6-b7c5-f606a756cacb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064127608s Apr 10 21:24:57.984: INFO: Pod "client-envvars-bbaec176-7d43-41b6-b7c5-f606a756cacb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068514835s STEP: Saw pod success Apr 10 21:24:57.984: INFO: Pod "client-envvars-bbaec176-7d43-41b6-b7c5-f606a756cacb" satisfied condition "success or failure" Apr 10 21:24:57.987: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-bbaec176-7d43-41b6-b7c5-f606a756cacb container env3cont: STEP: delete the pod Apr 10 21:24:58.018: INFO: Waiting for pod client-envvars-bbaec176-7d43-41b6-b7c5-f606a756cacb to disappear Apr 10 21:24:58.027: INFO: Pod client-envvars-bbaec176-7d43-41b6-b7c5-f606a756cacb no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:24:58.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7907" for this suite. 
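The environment-variables test above depends on the kubelet injecting `<SERVICE>_SERVICE_HOST` / `<SERVICE>_SERVICE_PORT` variables for services that exist when a pod starts. A minimal sketch, with hypothetical object names and an assumed image:

```yaml
# Hypothetical service + client pod; for a service named "fooservice", pods
# created afterwards see FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, etc.
apiVersion: v1
kind: Service
metadata:
  name: fooservice            # hypothetical name
spec:
  selector:
    app: foo
  ports:
  - port: 8765
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: env-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox:1.28       # assumed image
    command: ["env"]          # dumps the injected service variables to the log
```

This is why the test creates the service first, then reads the client pod's log to find the injected variables.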
• [SLOW TEST:8.299 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":886,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:24:58.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:24:58.817: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:25:00.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150698, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150698, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150698, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150698, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:25:03.870: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:25:13.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3106" for this suite. 
STEP: Destroying namespace "webhook-3106-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.068 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":55,"skipped":905,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:25:14.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:25:14.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5563" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":56,"skipped":924,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:25:14.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:25:15.363: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:25:17.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150715, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150715, 
loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150715, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150715, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:25:20.410: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:25:32.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6491" for this suite. STEP: Destroying namespace "webhook-6491-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.561 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":57,"skipped":924,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:25:32.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-6fff5142-67c0-4307-a32c-a580b96dc4e0 in namespace container-probe-2107 Apr 10 21:25:36.860: INFO: Started pod busybox-6fff5142-67c0-4307-a32c-a580b96dc4e0 in namespace container-probe-2107 STEP: checking the pod's current state and verifying that restartCount is present Apr 10 
21:25:36.863: INFO: Initial restart count of pod busybox-6fff5142-67c0-4307-a32c-a580b96dc4e0 is 0 Apr 10 21:26:26.974: INFO: Restart count of pod container-probe-2107/busybox-6fff5142-67c0-4307-a32c-a580b96dc4e0 is now 1 (50.110972151s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:26:26.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2107" for this suite. • [SLOW TEST:54.287 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":949,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:26:27.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:26:27.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6287' Apr 10 21:26:30.446: INFO: stderr: "" Apr 10 21:26:30.446: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 10 21:26:30.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6287' Apr 10 21:26:30.758: INFO: stderr: "" Apr 10 21:26:30.758: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 10 21:26:31.762: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:26:31.762: INFO: Found 0 / 1 Apr 10 21:26:32.790: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:26:32.790: INFO: Found 0 / 1 Apr 10 21:26:33.763: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:26:33.763: INFO: Found 1 / 1 Apr 10 21:26:33.763: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 10 21:26:33.766: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:26:33.766: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 10 21:26:33.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-shnfx --namespace=kubectl-6287' Apr 10 21:26:33.887: INFO: stderr: "" Apr 10 21:26:33.887: INFO: stdout: "Name: agnhost-master-shnfx\nNamespace: kubectl-6287\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Fri, 10 Apr 2020 21:26:30 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.65\nIPs:\n IP: 10.244.1.65\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://5882101da553aabd2cbd3725f65fdaf64ea385437ab37170448ddeeb7da7cd00\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 10 Apr 2020 21:26:33 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-f45zp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-f45zp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-f45zp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-6287/agnhost-master-shnfx to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 0s kubelet, jerma-worker Started container agnhost-master\n" Apr 10 21:26:33.888: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6287' Apr 10 21:26:34.020: INFO: stderr: "" Apr 10 21:26:34.020: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6287\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-shnfx\n" Apr 10 21:26:34.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6287' Apr 10 21:26:34.116: INFO: stderr: "" Apr 10 21:26:34.116: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6287\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.98.31.223\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.65:6379\nSession Affinity: None\nEvents: \n" Apr 10 21:26:34.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Apr 10 21:26:34.250: INFO: stderr: "" Apr 10 21:26:34.250: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: 
node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Fri, 10 Apr 2020 21:26:24 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 10 Apr 2020 21:23:08 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 10 Apr 2020 21:23:08 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 10 Apr 2020 21:23:08 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 10 Apr 2020 21:23:08 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 26d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 26d\n kube-system 
etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 26d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 10 21:26:34.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6287' Apr 10 21:26:34.348: INFO: stderr: "" Apr 10 21:26:34.348: INFO: stdout: "Name: kubectl-6287\nLabels: e2e-framework=kubectl\n e2e-run=876ca676-7ff4-4a52-a92f-2d64cfb906bd\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:26:34.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6287" for this suite. 
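The `kubectl describe node` output captured above reports allocated resources such as `cpu 850m (5%)` against a 16-CPU node. As a minimal illustrative sketch (not part of the e2e framework; helper names are invented here), the quantity strings and percentages can be reproduced like this:

```python
def parse_cpu_millicores(quantity: str) -> int:
    """Convert a Kubernetes CPU quantity string ("850m", "2", "0") to millicores."""
    q = quantity.strip()
    if q.endswith("m"):
        return int(q[:-1])
    return int(float(q) * 1000)

def percent_of_capacity(requests_mc: int, capacity_cores: int) -> float:
    """Express requested millicores as a percentage of node CPU capacity."""
    return 100.0 * requests_mc / (capacity_cores * 1000)

# Values from the describe output above: 850m total CPU requests on a 16-CPU node.
total = parse_cpu_millicores("850m")
print(total)                                   # 850
print(round(percent_of_capacity(total, 16)))   # 5, matching "cpu 850m (5%)"
```

This matches how the describe view truncates percentages: 850/16000 is about 5.3%, shown as `(5%)`.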
• [SLOW TEST:7.334 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":59,"skipped":969,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:26:34.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
dns-test-service-2.dns-3451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3451.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3451.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3451.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 21:26:40.489: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:40.515: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:40.519: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:40.522: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:40.547: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:40.549: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from 
pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:40.552: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:40.555: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:40.560: INFO: Lookups using dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local] Apr 10 21:26:45.566: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:45.570: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:45.575: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local from 
pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:45.578: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:45.587: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:45.589: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:45.592: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:45.595: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:45.601: INFO: Lookups using dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local] Apr 10 21:26:50.571: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:50.575: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:50.577: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:50.580: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:50.589: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:50.592: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:50.595: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod 
dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:50.598: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:50.604: INFO: Lookups using dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local] Apr 10 21:26:55.566: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:55.570: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:55.573: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:55.577: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod 
dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:55.587: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:55.591: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:55.594: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:55.597: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:26:55.603: INFO: Lookups using dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local] Apr 10 21:27:00.566: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:00.569: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:00.573: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:00.576: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:00.588: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:00.591: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:00.594: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:00.597: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:00.603: INFO: Lookups using dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local] Apr 10 21:27:05.570: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:05.574: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:05.577: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:05.580: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:05.588: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:05.591: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:05.594: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:05.597: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local from pod dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4: the server could not find the requested resource (get pods dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4) Apr 10 21:27:05.603: INFO: Lookups using dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3451.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3451.svc.cluster.local jessie_udp@dns-test-service-2.dns-3451.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3451.svc.cluster.local] Apr 10 21:27:10.601: INFO: DNS probes using dns-3451/dns-test-1c70cd91-4a94-4c9a-b505-6238298710d4 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 
21:27:10.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3451" for this suite. • [SLOW TEST:36.365 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":60,"skipped":979,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:27:10.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2142 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2142 I0410 21:27:11.503437 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2142, replica count: 2 I0410 21:27:14.553900 7 runners.go:189] externalname-service Pods: 2 out of 2 
created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 21:27:17.554094 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 10 21:27:17.554: INFO: Creating new exec pod Apr 10 21:27:22.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2142 execpodpzv7c -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 10 21:27:22.815: INFO: stderr: "I0410 21:27:22.715871 542 log.go:172] (0xc0000fb550) (0xc0006b7cc0) Create stream\nI0410 21:27:22.715925 542 log.go:172] (0xc0000fb550) (0xc0006b7cc0) Stream added, broadcasting: 1\nI0410 21:27:22.718629 542 log.go:172] (0xc0000fb550) Reply frame received for 1\nI0410 21:27:22.718686 542 log.go:172] (0xc0000fb550) (0xc000680640) Create stream\nI0410 21:27:22.718701 542 log.go:172] (0xc0000fb550) (0xc000680640) Stream added, broadcasting: 3\nI0410 21:27:22.720024 542 log.go:172] (0xc0000fb550) Reply frame received for 3\nI0410 21:27:22.720060 542 log.go:172] (0xc0000fb550) (0xc0006b7d60) Create stream\nI0410 21:27:22.720082 542 log.go:172] (0xc0000fb550) (0xc0006b7d60) Stream added, broadcasting: 5\nI0410 21:27:22.721534 542 log.go:172] (0xc0000fb550) Reply frame received for 5\nI0410 21:27:22.809019 542 log.go:172] (0xc0000fb550) Data frame received for 5\nI0410 21:27:22.809060 542 log.go:172] (0xc0006b7d60) (5) Data frame handling\nI0410 21:27:22.809090 542 log.go:172] (0xc0006b7d60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0410 21:27:22.809519 542 log.go:172] (0xc0000fb550) Data frame received for 5\nI0410 21:27:22.809536 542 log.go:172] (0xc0006b7d60) (5) Data frame handling\nI0410 21:27:22.809543 542 log.go:172] (0xc0006b7d60) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0410 21:27:22.809861 542 log.go:172] (0xc0000fb550) Data frame received for 
3\nI0410 21:27:22.809887 542 log.go:172] (0xc000680640) (3) Data frame handling\nI0410 21:27:22.809912 542 log.go:172] (0xc0000fb550) Data frame received for 5\nI0410 21:27:22.809938 542 log.go:172] (0xc0006b7d60) (5) Data frame handling\nI0410 21:27:22.811308 542 log.go:172] (0xc0000fb550) Data frame received for 1\nI0410 21:27:22.811321 542 log.go:172] (0xc0006b7cc0) (1) Data frame handling\nI0410 21:27:22.811327 542 log.go:172] (0xc0006b7cc0) (1) Data frame sent\nI0410 21:27:22.811347 542 log.go:172] (0xc0000fb550) (0xc0006b7cc0) Stream removed, broadcasting: 1\nI0410 21:27:22.811361 542 log.go:172] (0xc0000fb550) Go away received\nI0410 21:27:22.811719 542 log.go:172] (0xc0000fb550) (0xc0006b7cc0) Stream removed, broadcasting: 1\nI0410 21:27:22.811738 542 log.go:172] (0xc0000fb550) (0xc000680640) Stream removed, broadcasting: 3\nI0410 21:27:22.811753 542 log.go:172] (0xc0000fb550) (0xc0006b7d60) Stream removed, broadcasting: 5\n" Apr 10 21:27:22.816: INFO: stdout: "" Apr 10 21:27:22.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2142 execpodpzv7c -- /bin/sh -x -c nc -zv -t -w 2 10.99.147.182 80' Apr 10 21:27:23.036: INFO: stderr: "I0410 21:27:22.948895 564 log.go:172] (0xc000728a50) (0xc0008c2000) Create stream\nI0410 21:27:22.948961 564 log.go:172] (0xc000728a50) (0xc0008c2000) Stream added, broadcasting: 1\nI0410 21:27:22.952145 564 log.go:172] (0xc000728a50) Reply frame received for 1\nI0410 21:27:22.952209 564 log.go:172] (0xc000728a50) (0xc000952000) Create stream\nI0410 21:27:22.952229 564 log.go:172] (0xc000728a50) (0xc000952000) Stream added, broadcasting: 3\nI0410 21:27:22.953798 564 log.go:172] (0xc000728a50) Reply frame received for 3\nI0410 21:27:22.953841 564 log.go:172] (0xc000728a50) (0xc0008c2140) Create stream\nI0410 21:27:22.953855 564 log.go:172] (0xc000728a50) (0xc0008c2140) Stream added, broadcasting: 5\nI0410 21:27:22.955116 564 log.go:172] (0xc000728a50) Reply frame received for 
5\nI0410 21:27:23.024536 564 log.go:172] (0xc000728a50) Data frame received for 3\nI0410 21:27:23.024584 564 log.go:172] (0xc000952000) (3) Data frame handling\nI0410 21:27:23.024614 564 log.go:172] (0xc000728a50) Data frame received for 5\nI0410 21:27:23.024628 564 log.go:172] (0xc0008c2140) (5) Data frame handling\nI0410 21:27:23.024641 564 log.go:172] (0xc0008c2140) (5) Data frame sent\nI0410 21:27:23.024656 564 log.go:172] (0xc000728a50) Data frame received for 5\n+ nc -zv -t -w 2 10.99.147.182 80\nConnection to 10.99.147.182 80 port [tcp/http] succeeded!\nI0410 21:27:23.024671 564 log.go:172] (0xc0008c2140) (5) Data frame handling\nI0410 21:27:23.025864 564 log.go:172] (0xc000728a50) Data frame received for 1\nI0410 21:27:23.025893 564 log.go:172] (0xc0008c2000) (1) Data frame handling\nI0410 21:27:23.025907 564 log.go:172] (0xc0008c2000) (1) Data frame sent\nI0410 21:27:23.025920 564 log.go:172] (0xc000728a50) (0xc0008c2000) Stream removed, broadcasting: 1\nI0410 21:27:23.025942 564 log.go:172] (0xc000728a50) Go away received\nI0410 21:27:23.026319 564 log.go:172] (0xc000728a50) (0xc0008c2000) Stream removed, broadcasting: 1\nI0410 21:27:23.026341 564 log.go:172] (0xc000728a50) (0xc000952000) Stream removed, broadcasting: 3\nI0410 21:27:23.026350 564 log.go:172] (0xc000728a50) (0xc0008c2140) Stream removed, broadcasting: 5\n" Apr 10 21:27:23.036: INFO: stdout: "" Apr 10 21:27:23.036: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:27:23.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2142" for this suite. 
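The service test above verifies connectivity from the exec pod with `nc -zv -t -w 2 <host> 80`, a zero-I/O TCP connect check with a 2-second timeout. A self-contained sketch of the same check (exercised here against a throwaway local listener rather than a cluster Service, since no cluster is assumed):

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Rough equivalent of `nc -zv -t -w 2 host port`: succeed if TCP connect completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand up a local listener to play the role of the ClusterIP endpoint.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

print(tcp_check("127.0.0.1", port))  # True: listener accepts the connect
server.close()
print(tcp_check("127.0.0.1", port))  # False: connection refused once the listener is gone
```

The e2e test passes when both the service DNS name and the ClusterIP (10.99.147.182 in this run) accept the connect, which is what the two `Connection to ... 80 port [tcp/http] succeeded!` lines record.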
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.366 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":61,"skipped":1001,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:27:23.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:27:23.130: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 10 21:27:25.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-754 create -f -' Apr 10 21:27:28.190: INFO: stderr: "" Apr 10 21:27:28.190: INFO: stdout: "e2e-test-crd-publish-openapi-7965-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 10 
21:27:28.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-754 delete e2e-test-crd-publish-openapi-7965-crds test-foo' Apr 10 21:27:28.288: INFO: stderr: "" Apr 10 21:27:28.288: INFO: stdout: "e2e-test-crd-publish-openapi-7965-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 10 21:27:28.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-754 apply -f -' Apr 10 21:27:28.712: INFO: stderr: "" Apr 10 21:27:28.712: INFO: stdout: "e2e-test-crd-publish-openapi-7965-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 10 21:27:28.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-754 delete e2e-test-crd-publish-openapi-7965-crds test-foo' Apr 10 21:27:28.922: INFO: stderr: "" Apr 10 21:27:28.922: INFO: stdout: "e2e-test-crd-publish-openapi-7965-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 10 21:27:28.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-754 create -f -' Apr 10 21:27:29.190: INFO: rc: 1 Apr 10 21:27:29.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-754 apply -f -' Apr 10 21:27:29.420: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 10 21:27:29.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-754 create -f -' Apr 10 21:27:29.643: INFO: rc: 1 Apr 10 21:27:29.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-754 apply -f -' Apr 10 21:27:29.872: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 
10 21:27:29.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7965-crds' Apr 10 21:27:30.134: INFO: stderr: "" Apr 10 21:27:30.134: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7965-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 10 21:27:30.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7965-crds.metadata' Apr 10 21:27:30.369: INFO: stderr: "" Apr 10 21:27:30.369: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7965-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 10 21:27:30.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7965-crds.spec' Apr 10 21:27:30.607: INFO: stderr: "" Apr 10 21:27:30.607: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7965-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 10 21:27:30.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7965-crds.spec.bars' Apr 10 21:27:30.908: INFO: stderr: "" Apr 10 21:27:30.908: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7965-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 10 21:27:30.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7965-crds.spec.bars2' Apr 10 21:27:31.302: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:27:34.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-754" for this 
suite. • [SLOW TEST:11.148 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":62,"skipped":1021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:27:34.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:27:34.877: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:27:36.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150854, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150854, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150854, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150854, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:27:39.952: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:27:40.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7304" for this suite. STEP: Destroying namespace "webhook-7304-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.966 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":63,"skipped":1058,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:27:40.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:27:41.017: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:27:43.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150861, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150861, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150861, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722150861, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:27:46.059: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 10 21:27:50.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5027 to-be-attached-pod -i -c=container1' Apr 10 21:27:50.225: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:27:50.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5027" for this suite. STEP: Destroying namespace "webhook-5027-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.161 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":64,"skipped":1079,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:27:50.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:27:54.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1274" for this suite. 
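[Editor's note] The kubelet test above verifies that a container's stdout is surfaced through `kubectl logs`. A minimal manifest approximating the pod it creates looks like this (the pod name, image, and echoed string here are illustrative, not the test's exact spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo   # hypothetical name; the test generates a UUID-suffixed one
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # Write a line to stdout; the kubelet captures it as the container log
    command: ["sh", "-c", "echo 'Hello from busybox'"]
```

Once the container has run, `kubectl logs busybox-logs-demo` should return the echoed line, which is what the conformance check asserts.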
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1081,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:27:54.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3446.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3446.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 21:28:00.597: INFO: DNS probes using dns-3446/dns-test-71754413-a4d9-43d1-983f-52d2a8b98f9d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:28:00.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3446" for this suite. 
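[Editor's note] The DNS test above builds long `dig` loops into a probe pod and checks for `/results` marker files. A stripped-down sketch of such a probe pod, assuming a busybox image and `nslookup` instead of the test's `dig`-based probes, would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-probe-demo      # illustrative; the test uses a generated dns-test-* name
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox
    # Resolve the cluster's kubernetes service A record, as the test's
    # wheezy/jessie probes do for both UDP and TCP lookups
    command: ["sh", "-c", "nslookup kubernetes.default.svc.cluster.local"]
```

The real test additionally derives each pod's own A record (`<ip-with-dashes>.<namespace>.pod.cluster.local`) and probes it over both UDP and TCP.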
• [SLOW TEST:6.269 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":66,"skipped":1101,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:28:00.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 10 21:28:01.044: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 10 21:28:01.054: INFO: Waiting for terminating namespaces to be deleted... 
Apr 10 21:28:01.057: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 10 21:28:01.061: INFO: busybox-scheduling-826608c4-2000-4194-8229-ec2f3265ac9e from kubelet-test-1274 started at 2020-04-10 21:27:50 +0000 UTC (1 container statuses recorded) Apr 10 21:28:01.061: INFO: Container busybox-scheduling-826608c4-2000-4194-8229-ec2f3265ac9e ready: true, restart count 0 Apr 10 21:28:01.061: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:28:01.061: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 21:28:01.061: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:28:01.061: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 21:28:01.061: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 10 21:28:01.079: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 10 21:28:01.079: INFO: Container kube-hunter ready: false, restart count 0 Apr 10 21:28:01.079: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 10 21:28:01.079: INFO: Container kube-bench ready: false, restart count 0 Apr 10 21:28:01.079: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:28:01.079: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 21:28:01.079: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 21:28:01.079: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod
without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0b22ec0c-fdc1-4a65-a2f2-454c6f75b516 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-0b22ec0c-fdc1-4a65-a2f2-454c6f75b516 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-0b22ec0c-fdc1-4a65-a2f2-454c6f75b516 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:28:17.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9690" for this suite. 
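[Editor's note] The scheduler-predicates test above relies on hostPort conflicts being keyed on the (hostIP, hostPort, protocol) triple, not on hostPort alone. A sketch of the third pod's port spec, with illustrative names (the real test pins pods to the labeled node via node affinity), would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo-udp   # corresponds to pod3 in the test
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/pause
    ports:
    - containerPort: 8080
      hostPort: 54321       # same hostPort as pod1 and pod2
      hostIP: 127.0.0.2     # same hostIP as pod2...
      protocol: UDP         # ...but a different protocol, so no conflict
```

Because pod1 (127.0.0.1/TCP), pod2 (127.0.0.2/TCP), and pod3 (127.0.0.2/UDP) each differ in at least one element of the triple, all three schedule onto the same node.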
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.553 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":67,"skipped":1115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:28:17.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 10 21:28:21.884: INFO: Successfully updated pod "labelsupdate92f3d0b5-1998-4fd8-8b8b-a0d7703894a6" [AfterEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:28:23.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4837" for this suite. • [SLOW TEST:6.649 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:28:23.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-793876de-7c39-4da3-8f7f-e6cdb698b74e STEP: Creating a pod to test consume configMaps Apr 10 21:28:24.251: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca323549-df28-43ab-bbbf-a1313b826c29" in namespace "configmap-5959" to be "success or failure" Apr 10 
21:28:24.318: INFO: Pod "pod-configmaps-ca323549-df28-43ab-bbbf-a1313b826c29": Phase="Pending", Reason="", readiness=false. Elapsed: 66.172293ms Apr 10 21:28:26.354: INFO: Pod "pod-configmaps-ca323549-df28-43ab-bbbf-a1313b826c29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102863569s Apr 10 21:28:28.358: INFO: Pod "pod-configmaps-ca323549-df28-43ab-bbbf-a1313b826c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106781079s STEP: Saw pod success Apr 10 21:28:28.358: INFO: Pod "pod-configmaps-ca323549-df28-43ab-bbbf-a1313b826c29" satisfied condition "success or failure" Apr 10 21:28:28.361: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ca323549-df28-43ab-bbbf-a1313b826c29 container configmap-volume-test: STEP: delete the pod Apr 10 21:28:28.399: INFO: Waiting for pod pod-configmaps-ca323549-df28-43ab-bbbf-a1313b826c29 to disappear Apr 10 21:28:28.411: INFO: Pod pod-configmaps-ca323549-df28-43ab-bbbf-a1313b826c29 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:28:28.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5959" for this suite. 
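"With mappings as non-root" in the test name means two things at once: ConfigMap keys are remapped to custom relative paths via `items`, and the container runs with a non-root UID. A hedged sketch of that shape — key names, UID, and image are illustrative (the generated spec is in test/e2e/common/configmap_volume.go):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # illustrative
data:
  data-2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # illustrative
spec:
  securityContext:
    runAsUser: 1000                 # the "non-root" part of the test name
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                        # "with mappings": each key is exposed at a custom relative path
      - key: data-2
        path: path/to/data-2
```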
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1189,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:28:28.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 10 21:28:28.479: INFO: Waiting up to 5m0s for pod "downward-api-d59f3a7f-6ecc-4ad3-a0bc-8a2e152033ed" in namespace "downward-api-3928" to be "success or failure" Apr 10 21:28:28.483: INFO: Pod "downward-api-d59f3a7f-6ecc-4ad3-a0bc-8a2e152033ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.911375ms Apr 10 21:28:30.487: INFO: Pod "downward-api-d59f3a7f-6ecc-4ad3-a0bc-8a2e152033ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007466419s Apr 10 21:28:32.492: INFO: Pod "downward-api-d59f3a7f-6ecc-4ad3-a0bc-8a2e152033ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012351019s STEP: Saw pod success Apr 10 21:28:32.492: INFO: Pod "downward-api-d59f3a7f-6ecc-4ad3-a0bc-8a2e152033ed" satisfied condition "success or failure" Apr 10 21:28:32.494: INFO: Trying to get logs from node jerma-worker2 pod downward-api-d59f3a7f-6ecc-4ad3-a0bc-8a2e152033ed container dapi-container: STEP: delete the pod Apr 10 21:28:32.515: INFO: Waiting for pod downward-api-d59f3a7f-6ecc-4ad3-a0bc-8a2e152033ed to disappear Apr 10 21:28:32.519: INFO: Pod downward-api-d59f3a7f-6ecc-4ad3-a0bc-8a2e152033ed no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:28:32.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3928" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1209,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:28:32.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:28:36.676: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5096" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":71,"skipped":1221,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:28:36.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 10 21:28:36.806: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4001 /api/v1/namespaces/watch-4001/configmaps/e2e-watch-test-watch-closed 62ea56f5-b902-46d0-98da-d37b523dfd41 7039548 0 2020-04-10 21:28:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 21:28:36.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4001 /api/v1/namespaces/watch-4001/configmaps/e2e-watch-test-watch-closed 62ea56f5-b902-46d0-98da-d37b523dfd41 7039549 0 2020-04-10 21:28:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] 
[]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 10 21:28:37.003: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4001 /api/v1/namespaces/watch-4001/configmaps/e2e-watch-test-watch-closed 62ea56f5-b902-46d0-98da-d37b523dfd41 7039550 0 2020-04-10 21:28:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 21:28:37.003: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4001 /api/v1/namespaces/watch-4001/configmaps/e2e-watch-test-watch-closed 62ea56f5-b902-46d0-98da-d37b523dfd41 7039551 0 2020-04-10 21:28:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:28:37.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4001" for this suite. 
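What the watch test above verifies is the resourceVersion contract: a client that reopens a watch from the last version it observed receives every change made while it was disconnected, in order, and nothing it had already seen. A toy in-memory model of that contract — this is not the real client-go/kubernetes API, and all names here are made up:

```python
# Toy model of the Kubernetes watch "resume" semantics: replaying from the
# last observed resourceVersion yields exactly the changes made afterwards.
from dataclasses import dataclass

@dataclass
class Event:
    type: str              # ADDED / MODIFIED / DELETED
    resource_version: int

class EventLog:
    def __init__(self):
        self.events = []
        self.rv = 0

    def record(self, etype):
        # Every write bumps the resourceVersion and appends an event.
        self.rv += 1
        self.events.append(Event(etype, self.rv))
        return self.rv

    def watch(self, since_rv):
        # A watch opened at since_rv replays everything strictly newer.
        return [e for e in self.events if e.resource_version > since_rv]

log = EventLog()
log.record("ADDED")                 # create the configmap
last_seen = log.record("MODIFIED")  # first mutation; the watch closes after this
log.record("MODIFIED")              # second mutation while the watch is closed
log.record("DELETED")               # deletion while the watch is closed

resumed = log.watch(since_rv=last_seen)
print([e.type for e in resumed])    # prints ['MODIFIED', 'DELETED']
```

The restarted watch sees both changes it missed and neither of the events it already observed, which is exactly the sequence of MODIFIED and DELETED notifications in the log output above.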
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":72,"skipped":1242,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:28:37.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-de65cf73-bcf8-47af-92fb-26b81634facb STEP: Creating secret with name secret-projected-all-test-volume-808b978f-5224-443a-a402-0d32bb2161c9 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 10 21:28:37.216: INFO: Waiting up to 5m0s for pod "projected-volume-774e489b-c7ee-45f5-9603-db511dc05c4b" in namespace "projected-4516" to be "success or failure" Apr 10 21:28:37.280: INFO: Pod "projected-volume-774e489b-c7ee-45f5-9603-db511dc05c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 64.499105ms Apr 10 21:28:39.284: INFO: Pod "projected-volume-774e489b-c7ee-45f5-9603-db511dc05c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068327654s Apr 10 21:28:41.288: INFO: Pod "projected-volume-774e489b-c7ee-45f5-9603-db511dc05c4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.07271354s STEP: Saw pod success Apr 10 21:28:41.289: INFO: Pod "projected-volume-774e489b-c7ee-45f5-9603-db511dc05c4b" satisfied condition "success or failure" Apr 10 21:28:41.292: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-774e489b-c7ee-45f5-9603-db511dc05c4b container projected-all-volume-test: STEP: delete the pod Apr 10 21:28:41.361: INFO: Waiting for pod projected-volume-774e489b-c7ee-45f5-9603-db511dc05c4b to disappear Apr 10 21:28:41.375: INFO: Pod projected-volume-774e489b-c7ee-45f5-9603-db511dc05c4b no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:28:41.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4516" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:28:41.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-rcd2 STEP: Creating a pod to test atomic-volume-subpath Apr 10 21:28:41.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rcd2" in namespace "subpath-7786" to be "success or failure" Apr 10 21:28:41.492: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Pending", Reason="", readiness=false. Elapsed: 47.947559ms Apr 10 21:28:43.496: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05273273s Apr 10 21:28:45.501: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 4.05717429s Apr 10 21:28:47.504: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 6.060808818s Apr 10 21:28:49.509: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 8.065528327s Apr 10 21:28:51.513: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 10.069425953s Apr 10 21:28:53.518: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 12.073946648s Apr 10 21:28:55.522: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 14.078155119s Apr 10 21:28:57.525: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 16.081716412s Apr 10 21:28:59.529: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 18.085599344s Apr 10 21:29:01.534: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. Elapsed: 20.090236197s Apr 10 21:29:03.538: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.094563865s Apr 10 21:29:05.542: INFO: Pod "pod-subpath-test-secret-rcd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.098424449s STEP: Saw pod success Apr 10 21:29:05.542: INFO: Pod "pod-subpath-test-secret-rcd2" satisfied condition "success or failure" Apr 10 21:29:05.544: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-rcd2 container test-container-subpath-secret-rcd2: STEP: delete the pod Apr 10 21:29:05.583: INFO: Waiting for pod pod-subpath-test-secret-rcd2 to disappear Apr 10 21:29:05.592: INFO: Pod pod-subpath-test-secret-rcd2 no longer exists STEP: Deleting pod pod-subpath-test-secret-rcd2 Apr 10 21:29:05.592: INFO: Deleting pod "pod-subpath-test-secret-rcd2" in namespace "subpath-7786" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:29:05.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7786" for this suite. 
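The subpath test mounts a single key of a secret at a file path via `subPath` and has the container read it repeatedly while the kubelet updates the projected volume atomically (hence the long Running phase above). A hedged manifest sketch — secret name, key, and image are illustrative (the generated spec lives in test/e2e/storage/subpath.go):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret       # illustrative
spec:
  containers:
  - name: test-container-subpath-secret
    image: busybox
    command: ["cat", "/test-volume/secret-key"]
    volumeMounts:
    - name: secret-volume
      mountPath: /test-volume/secret-key  # a file, not a directory, because of subPath
      subPath: secret-key                 # select one key out of the projected secret
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret             # illustrative
```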
• [SLOW TEST:24.217 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":74,"skipped":1307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:29:05.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 10 21:29:05.678: INFO: Waiting up to 5m0s for pod "downward-api-fde0b6a8-12e2-412d-b525-de822f59b99d" in namespace "downward-api-304" to be "success or failure" Apr 10 21:29:05.738: INFO: Pod "downward-api-fde0b6a8-12e2-412d-b525-de822f59b99d": Phase="Pending", Reason="", readiness=false. Elapsed: 59.906739ms Apr 10 21:29:07.741: INFO: Pod "downward-api-fde0b6a8-12e2-412d-b525-de822f59b99d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063501631s Apr 10 21:29:09.746: INFO: Pod "downward-api-fde0b6a8-12e2-412d-b525-de822f59b99d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067944946s STEP: Saw pod success Apr 10 21:29:09.746: INFO: Pod "downward-api-fde0b6a8-12e2-412d-b525-de822f59b99d" satisfied condition "success or failure" Apr 10 21:29:09.750: INFO: Trying to get logs from node jerma-worker pod downward-api-fde0b6a8-12e2-412d-b525-de822f59b99d container dapi-container: STEP: delete the pod Apr 10 21:29:09.805: INFO: Waiting for pod downward-api-fde0b6a8-12e2-412d-b525-de822f59b99d to disappear Apr 10 21:29:09.814: INFO: Pod downward-api-fde0b6a8-12e2-412d-b525-de822f59b99d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:29:09.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-304" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1334,"failed":0} SSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:29:09.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 10 21:29:19.910: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:19.910: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:19.942838 7 log.go:172] (0xc001624bb0) (0xc0021e9360) Create stream I0410 21:29:19.942897 7 log.go:172] (0xc001624bb0) (0xc0021e9360) Stream added, broadcasting: 1 I0410 21:29:19.950477 7 log.go:172] (0xc001624bb0) Reply frame received for 1 I0410 21:29:19.950514 7 log.go:172] (0xc001624bb0) (0xc0011ed2c0) Create stream I0410 21:29:19.950528 7 log.go:172] (0xc001624bb0) (0xc0011ed2c0) Stream added, broadcasting: 3 I0410 21:29:19.951520 7 log.go:172] (0xc001624bb0) Reply frame received for 3 I0410 21:29:19.951543 7 log.go:172] (0xc001624bb0) (0xc00225f540) Create stream I0410 21:29:19.951552 7 log.go:172] (0xc001624bb0) (0xc00225f540) Stream added, broadcasting: 5 I0410 21:29:19.952180 7 log.go:172] (0xc001624bb0) Reply frame received for 5 I0410 21:29:20.037279 7 log.go:172] (0xc001624bb0) Data frame received for 5 I0410 21:29:20.037341 7 log.go:172] (0xc00225f540) (5) Data frame handling I0410 21:29:20.037395 7 log.go:172] (0xc001624bb0) Data frame received for 3 I0410 21:29:20.037427 7 log.go:172] (0xc0011ed2c0) (3) Data frame handling I0410 21:29:20.037455 7 log.go:172] (0xc0011ed2c0) (3) Data frame sent I0410 21:29:20.037473 7 log.go:172] (0xc001624bb0) Data frame received for 3 I0410 21:29:20.037490 7 log.go:172] (0xc0011ed2c0) (3) Data frame handling I0410 21:29:20.039283 7 log.go:172] (0xc001624bb0) Data frame received for 1 I0410 21:29:20.039306 7 log.go:172] (0xc0021e9360) (1) Data frame handling I0410 21:29:20.039333 7 log.go:172] (0xc0021e9360) (1) Data frame sent 
I0410 21:29:20.039363 7 log.go:172] (0xc001624bb0) (0xc0021e9360) Stream removed, broadcasting: 1 I0410 21:29:20.039388 7 log.go:172] (0xc001624bb0) Go away received I0410 21:29:20.039531 7 log.go:172] (0xc001624bb0) (0xc0021e9360) Stream removed, broadcasting: 1 I0410 21:29:20.039544 7 log.go:172] (0xc001624bb0) (0xc0011ed2c0) Stream removed, broadcasting: 3 I0410 21:29:20.039551 7 log.go:172] (0xc001624bb0) (0xc00225f540) Stream removed, broadcasting: 5 Apr 10 21:29:20.039: INFO: Exec stderr: "" Apr 10 21:29:20.039: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.039: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.068632 7 log.go:172] (0xc00151f8c0) (0xc0011ed720) Create stream I0410 21:29:20.068656 7 log.go:172] (0xc00151f8c0) (0xc0011ed720) Stream added, broadcasting: 1 I0410 21:29:20.071099 7 log.go:172] (0xc00151f8c0) Reply frame received for 1 I0410 21:29:20.071124 7 log.go:172] (0xc00151f8c0) (0xc0021e9400) Create stream I0410 21:29:20.071130 7 log.go:172] (0xc00151f8c0) (0xc0021e9400) Stream added, broadcasting: 3 I0410 21:29:20.072125 7 log.go:172] (0xc00151f8c0) Reply frame received for 3 I0410 21:29:20.072171 7 log.go:172] (0xc00151f8c0) (0xc00114d9a0) Create stream I0410 21:29:20.072188 7 log.go:172] (0xc00151f8c0) (0xc00114d9a0) Stream added, broadcasting: 5 I0410 21:29:20.073300 7 log.go:172] (0xc00151f8c0) Reply frame received for 5 I0410 21:29:20.128745 7 log.go:172] (0xc00151f8c0) Data frame received for 5 I0410 21:29:20.128787 7 log.go:172] (0xc00114d9a0) (5) Data frame handling I0410 21:29:20.128811 7 log.go:172] (0xc00151f8c0) Data frame received for 3 I0410 21:29:20.128825 7 log.go:172] (0xc0021e9400) (3) Data frame handling I0410 21:29:20.128840 7 log.go:172] (0xc0021e9400) (3) Data frame sent I0410 21:29:20.128854 7 log.go:172] (0xc00151f8c0) Data frame received 
for 3 I0410 21:29:20.128870 7 log.go:172] (0xc0021e9400) (3) Data frame handling I0410 21:29:20.130654 7 log.go:172] (0xc00151f8c0) Data frame received for 1 I0410 21:29:20.130700 7 log.go:172] (0xc0011ed720) (1) Data frame handling I0410 21:29:20.130729 7 log.go:172] (0xc0011ed720) (1) Data frame sent I0410 21:29:20.130748 7 log.go:172] (0xc00151f8c0) (0xc0011ed720) Stream removed, broadcasting: 1 I0410 21:29:20.130855 7 log.go:172] (0xc00151f8c0) Go away received I0410 21:29:20.130928 7 log.go:172] (0xc00151f8c0) (0xc0011ed720) Stream removed, broadcasting: 1 I0410 21:29:20.130953 7 log.go:172] (0xc00151f8c0) (0xc0021e9400) Stream removed, broadcasting: 3 I0410 21:29:20.130964 7 log.go:172] (0xc00151f8c0) (0xc00114d9a0) Stream removed, broadcasting: 5 Apr 10 21:29:20.130: INFO: Exec stderr: "" Apr 10 21:29:20.131: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.131: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.161651 7 log.go:172] (0xc00641fb80) (0xc00225f900) Create stream I0410 21:29:20.161688 7 log.go:172] (0xc00641fb80) (0xc00225f900) Stream added, broadcasting: 1 I0410 21:29:20.171902 7 log.go:172] (0xc00641fb80) Reply frame received for 1 I0410 21:29:20.171980 7 log.go:172] (0xc00641fb80) (0xc00114c000) Create stream I0410 21:29:20.172004 7 log.go:172] (0xc00641fb80) (0xc00114c000) Stream added, broadcasting: 3 I0410 21:29:20.172876 7 log.go:172] (0xc00641fb80) Reply frame received for 3 I0410 21:29:20.172898 7 log.go:172] (0xc00641fb80) (0xc0021e8000) Create stream I0410 21:29:20.172906 7 log.go:172] (0xc00641fb80) (0xc0021e8000) Stream added, broadcasting: 5 I0410 21:29:20.173804 7 log.go:172] (0xc00641fb80) Reply frame received for 5 I0410 21:29:20.241661 7 log.go:172] (0xc00641fb80) Data frame received for 3 I0410 21:29:20.241702 7 log.go:172] (0xc00114c000) (3) Data frame 
handling I0410 21:29:20.241730 7 log.go:172] (0xc00114c000) (3) Data frame sent I0410 21:29:20.241747 7 log.go:172] (0xc00641fb80) Data frame received for 3 I0410 21:29:20.241760 7 log.go:172] (0xc00114c000) (3) Data frame handling I0410 21:29:20.241848 7 log.go:172] (0xc00641fb80) Data frame received for 5 I0410 21:29:20.241876 7 log.go:172] (0xc0021e8000) (5) Data frame handling I0410 21:29:20.243665 7 log.go:172] (0xc00641fb80) Data frame received for 1 I0410 21:29:20.243697 7 log.go:172] (0xc00225f900) (1) Data frame handling I0410 21:29:20.243717 7 log.go:172] (0xc00225f900) (1) Data frame sent I0410 21:29:20.243743 7 log.go:172] (0xc00641fb80) (0xc00225f900) Stream removed, broadcasting: 1 I0410 21:29:20.243776 7 log.go:172] (0xc00641fb80) Go away received I0410 21:29:20.243917 7 log.go:172] (0xc00641fb80) (0xc00225f900) Stream removed, broadcasting: 1 I0410 21:29:20.243954 7 log.go:172] (0xc00641fb80) (0xc00114c000) Stream removed, broadcasting: 3 I0410 21:29:20.243969 7 log.go:172] (0xc00641fb80) (0xc0021e8000) Stream removed, broadcasting: 5 Apr 10 21:29:20.243: INFO: Exec stderr: "" Apr 10 21:29:20.244: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.244: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.274692 7 log.go:172] (0xc00195a420) (0xc0017003c0) Create stream I0410 21:29:20.274724 7 log.go:172] (0xc00195a420) (0xc0017003c0) Stream added, broadcasting: 1 I0410 21:29:20.280413 7 log.go:172] (0xc00195a420) Reply frame received for 1 I0410 21:29:20.280478 7 log.go:172] (0xc00195a420) (0xc00114c1e0) Create stream I0410 21:29:20.280499 7 log.go:172] (0xc00195a420) (0xc00114c1e0) Stream added, broadcasting: 3 I0410 21:29:20.285888 7 log.go:172] (0xc00195a420) Reply frame received for 3 I0410 21:29:20.285922 7 log.go:172] (0xc00195a420) (0xc001d12140) Create stream I0410 
21:29:20.285937 7 log.go:172] (0xc00195a420) (0xc001d12140) Stream added, broadcasting: 5 I0410 21:29:20.286854 7 log.go:172] (0xc00195a420) Reply frame received for 5 I0410 21:29:20.349233 7 log.go:172] (0xc00195a420) Data frame received for 3 I0410 21:29:20.349270 7 log.go:172] (0xc00114c1e0) (3) Data frame handling I0410 21:29:20.349284 7 log.go:172] (0xc00114c1e0) (3) Data frame sent I0410 21:29:20.349295 7 log.go:172] (0xc00195a420) Data frame received for 3 I0410 21:29:20.349309 7 log.go:172] (0xc00114c1e0) (3) Data frame handling I0410 21:29:20.349333 7 log.go:172] (0xc00195a420) Data frame received for 5 I0410 21:29:20.349351 7 log.go:172] (0xc001d12140) (5) Data frame handling I0410 21:29:20.350651 7 log.go:172] (0xc00195a420) Data frame received for 1 I0410 21:29:20.350669 7 log.go:172] (0xc0017003c0) (1) Data frame handling I0410 21:29:20.350677 7 log.go:172] (0xc0017003c0) (1) Data frame sent I0410 21:29:20.350692 7 log.go:172] (0xc00195a420) (0xc0017003c0) Stream removed, broadcasting: 1 I0410 21:29:20.350704 7 log.go:172] (0xc00195a420) Go away received I0410 21:29:20.350843 7 log.go:172] (0xc00195a420) (0xc0017003c0) Stream removed, broadcasting: 1 I0410 21:29:20.350859 7 log.go:172] (0xc00195a420) (0xc00114c1e0) Stream removed, broadcasting: 3 I0410 21:29:20.350867 7 log.go:172] (0xc00195a420) (0xc001d12140) Stream removed, broadcasting: 5 Apr 10 21:29:20.350: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 10 21:29:20.350: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.350: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.383925 7 log.go:172] (0xc00169c000) (0xc0021e8280) Create stream I0410 21:29:20.383948 7 log.go:172] (0xc00169c000) (0xc0021e8280) Stream added, broadcasting: 1 I0410 
21:29:20.386235 7 log.go:172] (0xc00169c000) Reply frame received for 1 I0410 21:29:20.386277 7 log.go:172] (0xc00169c000) (0xc00163c1e0) Create stream I0410 21:29:20.386293 7 log.go:172] (0xc00169c000) (0xc00163c1e0) Stream added, broadcasting: 3 I0410 21:29:20.387257 7 log.go:172] (0xc00169c000) Reply frame received for 3 I0410 21:29:20.387314 7 log.go:172] (0xc00169c000) (0xc00114c320) Create stream I0410 21:29:20.387338 7 log.go:172] (0xc00169c000) (0xc00114c320) Stream added, broadcasting: 5 I0410 21:29:20.388321 7 log.go:172] (0xc00169c000) Reply frame received for 5 I0410 21:29:20.457949 7 log.go:172] (0xc00169c000) Data frame received for 3 I0410 21:29:20.458001 7 log.go:172] (0xc00163c1e0) (3) Data frame handling I0410 21:29:20.458030 7 log.go:172] (0xc00163c1e0) (3) Data frame sent I0410 21:29:20.458062 7 log.go:172] (0xc00169c000) Data frame received for 3 I0410 21:29:20.458084 7 log.go:172] (0xc00163c1e0) (3) Data frame handling I0410 21:29:20.458109 7 log.go:172] (0xc00169c000) Data frame received for 5 I0410 21:29:20.458184 7 log.go:172] (0xc00114c320) (5) Data frame handling I0410 21:29:20.459573 7 log.go:172] (0xc00169c000) Data frame received for 1 I0410 21:29:20.459652 7 log.go:172] (0xc0021e8280) (1) Data frame handling I0410 21:29:20.459681 7 log.go:172] (0xc0021e8280) (1) Data frame sent I0410 21:29:20.459696 7 log.go:172] (0xc00169c000) (0xc0021e8280) Stream removed, broadcasting: 1 I0410 21:29:20.459734 7 log.go:172] (0xc00169c000) Go away received I0410 21:29:20.459882 7 log.go:172] (0xc00169c000) (0xc0021e8280) Stream removed, broadcasting: 1 I0410 21:29:20.459913 7 log.go:172] (0xc00169c000) (0xc00163c1e0) Stream removed, broadcasting: 3 I0410 21:29:20.459930 7 log.go:172] (0xc00169c000) (0xc00114c320) Stream removed, broadcasting: 5 Apr 10 21:29:20.459: INFO: Exec stderr: "" Apr 10 21:29:20.460: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-pod ContainerName:busybox-3 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.460: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.494325 7 log.go:172] (0xc0026a44d0) (0xc00163c460) Create stream I0410 21:29:20.494348 7 log.go:172] (0xc0026a44d0) (0xc00163c460) Stream added, broadcasting: 1 I0410 21:29:20.496266 7 log.go:172] (0xc0026a44d0) Reply frame received for 1 I0410 21:29:20.496318 7 log.go:172] (0xc0026a44d0) (0xc00114c460) Create stream I0410 21:29:20.496339 7 log.go:172] (0xc0026a44d0) (0xc00114c460) Stream added, broadcasting: 3 I0410 21:29:20.497763 7 log.go:172] (0xc0026a44d0) Reply frame received for 3 I0410 21:29:20.497810 7 log.go:172] (0xc0026a44d0) (0xc001700460) Create stream I0410 21:29:20.497825 7 log.go:172] (0xc0026a44d0) (0xc001700460) Stream added, broadcasting: 5 I0410 21:29:20.498898 7 log.go:172] (0xc0026a44d0) Reply frame received for 5 I0410 21:29:20.557589 7 log.go:172] (0xc0026a44d0) Data frame received for 3 I0410 21:29:20.557629 7 log.go:172] (0xc00114c460) (3) Data frame handling I0410 21:29:20.557666 7 log.go:172] (0xc00114c460) (3) Data frame sent I0410 21:29:20.557689 7 log.go:172] (0xc0026a44d0) Data frame received for 3 I0410 21:29:20.557710 7 log.go:172] (0xc00114c460) (3) Data frame handling I0410 21:29:20.557735 7 log.go:172] (0xc0026a44d0) Data frame received for 5 I0410 21:29:20.557760 7 log.go:172] (0xc001700460) (5) Data frame handling I0410 21:29:20.559174 7 log.go:172] (0xc0026a44d0) Data frame received for 1 I0410 21:29:20.559200 7 log.go:172] (0xc00163c460) (1) Data frame handling I0410 21:29:20.559224 7 log.go:172] (0xc00163c460) (1) Data frame sent I0410 21:29:20.559365 7 log.go:172] (0xc0026a44d0) (0xc00163c460) Stream removed, broadcasting: 1 I0410 21:29:20.559407 7 log.go:172] (0xc0026a44d0) Go away received I0410 21:29:20.559452 7 log.go:172] (0xc0026a44d0) (0xc00163c460) Stream removed, broadcasting: 1 I0410 21:29:20.559501 7 log.go:172] (0xc0026a44d0) (0xc00114c460) Stream removed, broadcasting: 3 
I0410 21:29:20.559524 7 log.go:172] (0xc0026a44d0) (0xc001700460) Stream removed, broadcasting: 5 Apr 10 21:29:20.559: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 10 21:29:20.559: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.559: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.599310 7 log.go:172] (0xc00169c6e0) (0xc0021e8780) Create stream I0410 21:29:20.599340 7 log.go:172] (0xc00169c6e0) (0xc0021e8780) Stream added, broadcasting: 1 I0410 21:29:20.601766 7 log.go:172] (0xc00169c6e0) Reply frame received for 1 I0410 21:29:20.601794 7 log.go:172] (0xc00169c6e0) (0xc00163c500) Create stream I0410 21:29:20.601805 7 log.go:172] (0xc00169c6e0) (0xc00163c500) Stream added, broadcasting: 3 I0410 21:29:20.602854 7 log.go:172] (0xc00169c6e0) Reply frame received for 3 I0410 21:29:20.602912 7 log.go:172] (0xc00169c6e0) (0xc00163c820) Create stream I0410 21:29:20.602939 7 log.go:172] (0xc00169c6e0) (0xc00163c820) Stream added, broadcasting: 5 I0410 21:29:20.603937 7 log.go:172] (0xc00169c6e0) Reply frame received for 5 I0410 21:29:20.676592 7 log.go:172] (0xc00169c6e0) Data frame received for 5 I0410 21:29:20.676643 7 log.go:172] (0xc00163c820) (5) Data frame handling I0410 21:29:20.676708 7 log.go:172] (0xc00169c6e0) Data frame received for 3 I0410 21:29:20.676743 7 log.go:172] (0xc00163c500) (3) Data frame handling I0410 21:29:20.676776 7 log.go:172] (0xc00163c500) (3) Data frame sent I0410 21:29:20.676792 7 log.go:172] (0xc00169c6e0) Data frame received for 3 I0410 21:29:20.676812 7 log.go:172] (0xc00163c500) (3) Data frame handling I0410 21:29:20.678265 7 log.go:172] (0xc00169c6e0) Data frame received for 1 I0410 21:29:20.678300 7 log.go:172] (0xc0021e8780) (1) Data frame handling I0410 21:29:20.678320 7 
log.go:172] (0xc0021e8780) (1) Data frame sent I0410 21:29:20.678348 7 log.go:172] (0xc00169c6e0) (0xc0021e8780) Stream removed, broadcasting: 1 I0410 21:29:20.678399 7 log.go:172] (0xc00169c6e0) Go away received I0410 21:29:20.678526 7 log.go:172] (0xc00169c6e0) (0xc0021e8780) Stream removed, broadcasting: 1 I0410 21:29:20.678554 7 log.go:172] (0xc00169c6e0) (0xc00163c500) Stream removed, broadcasting: 3 I0410 21:29:20.678578 7 log.go:172] (0xc00169c6e0) (0xc00163c820) Stream removed, broadcasting: 5 Apr 10 21:29:20.678: INFO: Exec stderr: "" Apr 10 21:29:20.678: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.678: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.710103 7 log.go:172] (0xc0026a4b00) (0xc00163cbe0) Create stream I0410 21:29:20.710130 7 log.go:172] (0xc0026a4b00) (0xc00163cbe0) Stream added, broadcasting: 1 I0410 21:29:20.712753 7 log.go:172] (0xc0026a4b00) Reply frame received for 1 I0410 21:29:20.712794 7 log.go:172] (0xc0026a4b00) (0xc001700500) Create stream I0410 21:29:20.712810 7 log.go:172] (0xc0026a4b00) (0xc001700500) Stream added, broadcasting: 3 I0410 21:29:20.713984 7 log.go:172] (0xc0026a4b00) Reply frame received for 3 I0410 21:29:20.714023 7 log.go:172] (0xc0026a4b00) (0xc0021e8820) Create stream I0410 21:29:20.714039 7 log.go:172] (0xc0026a4b00) (0xc0021e8820) Stream added, broadcasting: 5 I0410 21:29:20.714923 7 log.go:172] (0xc0026a4b00) Reply frame received for 5 I0410 21:29:20.774596 7 log.go:172] (0xc0026a4b00) Data frame received for 5 I0410 21:29:20.774639 7 log.go:172] (0xc0021e8820) (5) Data frame handling I0410 21:29:20.774681 7 log.go:172] (0xc0026a4b00) Data frame received for 3 I0410 21:29:20.774701 7 log.go:172] (0xc001700500) (3) Data frame handling I0410 21:29:20.774715 7 log.go:172] (0xc001700500) (3) Data frame sent I0410 
21:29:20.774738 7 log.go:172] (0xc0026a4b00) Data frame received for 3 I0410 21:29:20.774752 7 log.go:172] (0xc001700500) (3) Data frame handling I0410 21:29:20.776317 7 log.go:172] (0xc0026a4b00) Data frame received for 1 I0410 21:29:20.776354 7 log.go:172] (0xc00163cbe0) (1) Data frame handling I0410 21:29:20.776376 7 log.go:172] (0xc00163cbe0) (1) Data frame sent I0410 21:29:20.776395 7 log.go:172] (0xc0026a4b00) (0xc00163cbe0) Stream removed, broadcasting: 1 I0410 21:29:20.776418 7 log.go:172] (0xc0026a4b00) Go away received I0410 21:29:20.776612 7 log.go:172] (0xc0026a4b00) (0xc00163cbe0) Stream removed, broadcasting: 1 I0410 21:29:20.776639 7 log.go:172] (0xc0026a4b00) (0xc001700500) Stream removed, broadcasting: 3 I0410 21:29:20.776655 7 log.go:172] (0xc0026a4b00) (0xc0021e8820) Stream removed, broadcasting: 5 Apr 10 21:29:20.776: INFO: Exec stderr: "" Apr 10 21:29:20.776: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.776: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.818474 7 log.go:172] (0xc00169cd10) (0xc0021e8be0) Create stream I0410 21:29:20.818496 7 log.go:172] (0xc00169cd10) (0xc0021e8be0) Stream added, broadcasting: 1 I0410 21:29:20.821233 7 log.go:172] (0xc00169cd10) Reply frame received for 1 I0410 21:29:20.821265 7 log.go:172] (0xc00169cd10) (0xc0017005a0) Create stream I0410 21:29:20.821274 7 log.go:172] (0xc00169cd10) (0xc0017005a0) Stream added, broadcasting: 3 I0410 21:29:20.822211 7 log.go:172] (0xc00169cd10) Reply frame received for 3 I0410 21:29:20.822235 7 log.go:172] (0xc00169cd10) (0xc00163cd20) Create stream I0410 21:29:20.822244 7 log.go:172] (0xc00169cd10) (0xc00163cd20) Stream added, broadcasting: 5 I0410 21:29:20.823376 7 log.go:172] (0xc00169cd10) Reply frame received for 5 I0410 21:29:20.888040 7 log.go:172] (0xc00169cd10) Data frame received 
for 5 I0410 21:29:20.888105 7 log.go:172] (0xc00163cd20) (5) Data frame handling I0410 21:29:20.888177 7 log.go:172] (0xc00169cd10) Data frame received for 3 I0410 21:29:20.888207 7 log.go:172] (0xc0017005a0) (3) Data frame handling I0410 21:29:20.888256 7 log.go:172] (0xc0017005a0) (3) Data frame sent I0410 21:29:20.888296 7 log.go:172] (0xc00169cd10) Data frame received for 3 I0410 21:29:20.888326 7 log.go:172] (0xc0017005a0) (3) Data frame handling I0410 21:29:20.889774 7 log.go:172] (0xc00169cd10) Data frame received for 1 I0410 21:29:20.889819 7 log.go:172] (0xc0021e8be0) (1) Data frame handling I0410 21:29:20.889853 7 log.go:172] (0xc0021e8be0) (1) Data frame sent I0410 21:29:20.889888 7 log.go:172] (0xc00169cd10) (0xc0021e8be0) Stream removed, broadcasting: 1 I0410 21:29:20.889923 7 log.go:172] (0xc00169cd10) Go away received I0410 21:29:20.889992 7 log.go:172] (0xc00169cd10) (0xc0021e8be0) Stream removed, broadcasting: 1 I0410 21:29:20.890008 7 log.go:172] (0xc00169cd10) (0xc0017005a0) Stream removed, broadcasting: 3 I0410 21:29:20.890015 7 log.go:172] (0xc00169cd10) (0xc00163cd20) Stream removed, broadcasting: 5 Apr 10 21:29:20.890: INFO: Exec stderr: "" Apr 10 21:29:20.890: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5104 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 21:29:20.890: INFO: >>> kubeConfig: /root/.kube/config I0410 21:29:20.925076 7 log.go:172] (0xc0026a4e70) (0xc00163cf00) Create stream I0410 21:29:20.925105 7 log.go:172] (0xc0026a4e70) (0xc00163cf00) Stream added, broadcasting: 1 I0410 21:29:20.927817 7 log.go:172] (0xc0026a4e70) Reply frame received for 1 I0410 21:29:20.927859 7 log.go:172] (0xc0026a4e70) (0xc0017008c0) Create stream I0410 21:29:20.927874 7 log.go:172] (0xc0026a4e70) (0xc0017008c0) Stream added, broadcasting: 3 I0410 21:29:20.928837 7 log.go:172] (0xc0026a4e70) Reply frame received for 3 I0410 
21:29:20.928859 7 log.go:172] (0xc0026a4e70) (0xc00163d0e0) Create stream I0410 21:29:20.928869 7 log.go:172] (0xc0026a4e70) (0xc00163d0e0) Stream added, broadcasting: 5 I0410 21:29:20.929861 7 log.go:172] (0xc0026a4e70) Reply frame received for 5 I0410 21:29:21.006605 7 log.go:172] (0xc0026a4e70) Data frame received for 3 I0410 21:29:21.006631 7 log.go:172] (0xc0017008c0) (3) Data frame handling I0410 21:29:21.006642 7 log.go:172] (0xc0017008c0) (3) Data frame sent I0410 21:29:21.006658 7 log.go:172] (0xc0026a4e70) Data frame received for 5 I0410 21:29:21.006663 7 log.go:172] (0xc00163d0e0) (5) Data frame handling I0410 21:29:21.006681 7 log.go:172] (0xc0026a4e70) Data frame received for 3 I0410 21:29:21.006701 7 log.go:172] (0xc0017008c0) (3) Data frame handling I0410 21:29:21.008156 7 log.go:172] (0xc0026a4e70) Data frame received for 1 I0410 21:29:21.008184 7 log.go:172] (0xc00163cf00) (1) Data frame handling I0410 21:29:21.008201 7 log.go:172] (0xc00163cf00) (1) Data frame sent I0410 21:29:21.008215 7 log.go:172] (0xc0026a4e70) (0xc00163cf00) Stream removed, broadcasting: 1 I0410 21:29:21.008232 7 log.go:172] (0xc0026a4e70) Go away received I0410 21:29:21.008314 7 log.go:172] (0xc0026a4e70) (0xc00163cf00) Stream removed, broadcasting: 1 I0410 21:29:21.008327 7 log.go:172] (0xc0026a4e70) (0xc0017008c0) Stream removed, broadcasting: 3 I0410 21:29:21.008333 7 log.go:172] (0xc0026a4e70) (0xc00163d0e0) Stream removed, broadcasting: 5 Apr 10 21:29:21.008: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:29:21.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5104" for this suite. 
• [SLOW TEST:11.195 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1338,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:29:21.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:29:52.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3153" for this suite. STEP: Destroying namespace "nsdeletetest-4043" for this suite. Apr 10 21:29:52.276: INFO: Namespace nsdeletetest-4043 was already deleted STEP: Destroying namespace "nsdeletetest-6833" for this suite. • [SLOW TEST:31.262 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":77,"skipped":1341,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:29:52.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Apr 10 21:29:52.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 10 21:29:52.417: INFO: stderr: "" Apr 10 21:29:52.417: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:29:52.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9961" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":78,"skipped":1343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:29:52.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1071 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1071 STEP: creating replication controller externalsvc in namespace services-1071 I0410 21:29:52.599638 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1071, replica count: 2 I0410 21:29:55.650165 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 21:29:58.650405 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 10 21:29:58.709: INFO: Creating new exec pod Apr 10 21:30:02.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1071 execpodjwk66 -- /bin/sh -x -c nslookup nodeport-service' Apr 10 21:30:02.970: INFO: stderr: "I0410 21:30:02.861546 918 log.go:172] (0xc000a00000) (0xc0009e6000) Create stream\nI0410 21:30:02.861598 918 log.go:172] (0xc000a00000) (0xc0009e6000) Stream added, broadcasting: 1\nI0410 21:30:02.864024 918 log.go:172] (0xc000a00000) Reply frame received for 1\nI0410 21:30:02.864069 918 log.go:172] (0xc000a00000) (0xc0009e60a0) Create stream\nI0410 21:30:02.864083 918 log.go:172] (0xc000a00000) (0xc0009e60a0) Stream added, broadcasting: 3\nI0410 21:30:02.865052 918 log.go:172] (0xc000a00000) Reply frame received for 3\nI0410 21:30:02.865255 918 log.go:172] (0xc000a00000) (0xc0009a0000) Create stream\nI0410 21:30:02.865297 918 log.go:172] (0xc000a00000) (0xc0009a0000) Stream added, broadcasting: 5\nI0410 21:30:02.866091 918 
log.go:172] (0xc000a00000) Reply frame received for 5\nI0410 21:30:02.953820 918 log.go:172] (0xc000a00000) Data frame received for 5\nI0410 21:30:02.953851 918 log.go:172] (0xc0009a0000) (5) Data frame handling\nI0410 21:30:02.953866 918 log.go:172] (0xc0009a0000) (5) Data frame sent\n+ nslookup nodeport-service\nI0410 21:30:02.960623 918 log.go:172] (0xc000a00000) Data frame received for 3\nI0410 21:30:02.960655 918 log.go:172] (0xc0009e60a0) (3) Data frame handling\nI0410 21:30:02.960679 918 log.go:172] (0xc0009e60a0) (3) Data frame sent\nI0410 21:30:02.961566 918 log.go:172] (0xc000a00000) Data frame received for 3\nI0410 21:30:02.961613 918 log.go:172] (0xc0009e60a0) (3) Data frame handling\nI0410 21:30:02.961638 918 log.go:172] (0xc0009e60a0) (3) Data frame sent\nI0410 21:30:02.962021 918 log.go:172] (0xc000a00000) Data frame received for 5\nI0410 21:30:02.962051 918 log.go:172] (0xc0009a0000) (5) Data frame handling\nI0410 21:30:02.962129 918 log.go:172] (0xc000a00000) Data frame received for 3\nI0410 21:30:02.962146 918 log.go:172] (0xc0009e60a0) (3) Data frame handling\nI0410 21:30:02.964084 918 log.go:172] (0xc000a00000) Data frame received for 1\nI0410 21:30:02.964130 918 log.go:172] (0xc0009e6000) (1) Data frame handling\nI0410 21:30:02.964165 918 log.go:172] (0xc0009e6000) (1) Data frame sent\nI0410 21:30:02.964192 918 log.go:172] (0xc000a00000) (0xc0009e6000) Stream removed, broadcasting: 1\nI0410 21:30:02.964226 918 log.go:172] (0xc000a00000) Go away received\nI0410 21:30:02.964626 918 log.go:172] (0xc000a00000) (0xc0009e6000) Stream removed, broadcasting: 1\nI0410 21:30:02.964653 918 log.go:172] (0xc000a00000) (0xc0009e60a0) Stream removed, broadcasting: 3\nI0410 21:30:02.964667 918 log.go:172] (0xc000a00000) (0xc0009a0000) Stream removed, broadcasting: 5\n" Apr 10 21:30:02.970: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1071.svc.cluster.local\tcanonical name = 
externalsvc.services-1071.svc.cluster.local.\nName:\texternalsvc.services-1071.svc.cluster.local\nAddress: 10.110.212.115\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1071, will wait for the garbage collector to delete the pods Apr 10 21:30:03.030: INFO: Deleting ReplicationController externalsvc took: 5.850896ms Apr 10 21:30:03.430: INFO: Terminating ReplicationController externalsvc pods took: 400.271479ms Apr 10 21:30:19.256: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:30:19.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1071" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.905 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":79,"skipped":1441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:30:19.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset 
STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:30:19.422: INFO: Creating ReplicaSet my-hostname-basic-0fb8e787-7875-4426-8bb5-ca2a3f42f632 Apr 10 21:30:19.432: INFO: Pod name my-hostname-basic-0fb8e787-7875-4426-8bb5-ca2a3f42f632: Found 0 pods out of 1 Apr 10 21:30:24.455: INFO: Pod name my-hostname-basic-0fb8e787-7875-4426-8bb5-ca2a3f42f632: Found 1 pods out of 1 Apr 10 21:30:24.455: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0fb8e787-7875-4426-8bb5-ca2a3f42f632" is running Apr 10 21:30:24.469: INFO: Pod "my-hostname-basic-0fb8e787-7875-4426-8bb5-ca2a3f42f632-pqd4k" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 21:30:19 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 21:30:22 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 21:30:22 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-10 21:30:19 +0000 UTC Reason: Message:}]) Apr 10 21:30:24.469: INFO: Trying to dial the pod Apr 10 21:30:29.481: INFO: Controller my-hostname-basic-0fb8e787-7875-4426-8bb5-ca2a3f42f632: Got expected result from replica 1 [my-hostname-basic-0fb8e787-7875-4426-8bb5-ca2a3f42f632-pqd4k]: "my-hostname-basic-0fb8e787-7875-4426-8bb5-ca2a3f42f632-pqd4k", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:30:29.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7157" for this 
suite. • [SLOW TEST:10.156 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":80,"skipped":1471,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:30:29.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:30:33.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5123" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1478,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:30:33.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-a7031762-cee4-471d-92e9-bab61574956d STEP: Creating a pod to test consume secrets Apr 10 21:30:33.651: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-40a3d18d-92f3-4cb3-bf81-c9216a2ce954" in namespace "projected-9221" to be "success or failure" Apr 10 21:30:33.655: INFO: Pod "pod-projected-secrets-40a3d18d-92f3-4cb3-bf81-c9216a2ce954": Phase="Pending", Reason="", readiness=false. Elapsed: 3.365643ms Apr 10 21:30:35.667: INFO: Pod "pod-projected-secrets-40a3d18d-92f3-4cb3-bf81-c9216a2ce954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015967121s Apr 10 21:30:37.671: INFO: Pod "pod-projected-secrets-40a3d18d-92f3-4cb3-bf81-c9216a2ce954": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019857225s STEP: Saw pod success Apr 10 21:30:37.671: INFO: Pod "pod-projected-secrets-40a3d18d-92f3-4cb3-bf81-c9216a2ce954" satisfied condition "success or failure" Apr 10 21:30:37.674: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-40a3d18d-92f3-4cb3-bf81-c9216a2ce954 container projected-secret-volume-test: STEP: delete the pod Apr 10 21:30:37.692: INFO: Waiting for pod pod-projected-secrets-40a3d18d-92f3-4cb3-bf81-c9216a2ce954 to disappear Apr 10 21:30:37.709: INFO: Pod pod-projected-secrets-40a3d18d-92f3-4cb3-bf81-c9216a2ce954 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:30:37.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9221" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:30:37.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap 
configmap-6794/configmap-test-68e62610-473d-4486-bb67-45377a17851b STEP: Creating a pod to test consume configMaps Apr 10 21:30:37.819: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c571d00-8204-45ca-b923-98fe4b7fa43b" in namespace "configmap-6794" to be "success or failure" Apr 10 21:30:37.823: INFO: Pod "pod-configmaps-4c571d00-8204-45ca-b923-98fe4b7fa43b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095222ms Apr 10 21:30:39.826: INFO: Pod "pod-configmaps-4c571d00-8204-45ca-b923-98fe4b7fa43b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007001196s Apr 10 21:30:41.831: INFO: Pod "pod-configmaps-4c571d00-8204-45ca-b923-98fe4b7fa43b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011272531s STEP: Saw pod success Apr 10 21:30:41.831: INFO: Pod "pod-configmaps-4c571d00-8204-45ca-b923-98fe4b7fa43b" satisfied condition "success or failure" Apr 10 21:30:41.834: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4c571d00-8204-45ca-b923-98fe4b7fa43b container env-test: STEP: delete the pod Apr 10 21:30:41.869: INFO: Waiting for pod pod-configmaps-4c571d00-8204-45ca-b923-98fe4b7fa43b to disappear Apr 10 21:30:41.883: INFO: Pod pod-configmaps-4c571d00-8204-45ca-b923-98fe4b7fa43b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:30:41.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6794" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:30:41.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 10 21:30:41.956: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:30:58.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9179" for this suite. 
• [SLOW TEST:16.737 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":84,"skipped":1568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:30:58.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:30:59.263: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:31:01.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151059, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151059, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151059, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151059, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:31:04.328: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:31:04.519: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2317" for this suite. STEP: Destroying namespace "webhook-2317-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.029 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":85,"skipped":1625,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:31:04.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:31:08.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9742" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1626,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:31:08.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6580 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6580 STEP: Creating statefulset with conflicting port in namespace statefulset-6580 STEP: Waiting until pod test-pod will start running in namespace 
statefulset-6580 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6580 Apr 10 21:31:12.895: INFO: Observed stateful pod in namespace: statefulset-6580, name: ss-0, uid: f52babf6-e9a4-4d42-bf54-ce76f6fe74de, status phase: Pending. Waiting for statefulset controller to delete. Apr 10 21:31:13.231: INFO: Observed stateful pod in namespace: statefulset-6580, name: ss-0, uid: f52babf6-e9a4-4d42-bf54-ce76f6fe74de, status phase: Failed. Waiting for statefulset controller to delete. Apr 10 21:31:13.238: INFO: Observed stateful pod in namespace: statefulset-6580, name: ss-0, uid: f52babf6-e9a4-4d42-bf54-ce76f6fe74de, status phase: Failed. Waiting for statefulset controller to delete. Apr 10 21:31:13.267: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6580 STEP: Removing pod with conflicting port in namespace statefulset-6580 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6580 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 10 21:31:17.359: INFO: Deleting all statefulset in ns statefulset-6580 Apr 10 21:31:17.362: INFO: Scaling statefulset ss to 0 Apr 10 21:31:27.378: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:31:27.381: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:31:27.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6580" for this suite. 
• [SLOW TEST:18.643 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":87,"skipped":1626,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:31:27.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 10 21:31:27.454: INFO: Waiting up to 5m0s for pod "pod-e50d43b8-0ac0-4875-8b03-e73d4afe8bcd" in namespace "emptydir-311" to be "success or failure" Apr 10 21:31:27.458: INFO: Pod "pod-e50d43b8-0ac0-4875-8b03-e73d4afe8bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13092ms Apr 10 21:31:29.461: INFO: Pod "pod-e50d43b8-0ac0-4875-8b03-e73d4afe8bcd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006744826s Apr 10 21:31:31.471: INFO: Pod "pod-e50d43b8-0ac0-4875-8b03-e73d4afe8bcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017277637s STEP: Saw pod success Apr 10 21:31:31.471: INFO: Pod "pod-e50d43b8-0ac0-4875-8b03-e73d4afe8bcd" satisfied condition "success or failure" Apr 10 21:31:31.495: INFO: Trying to get logs from node jerma-worker pod pod-e50d43b8-0ac0-4875-8b03-e73d4afe8bcd container test-container: STEP: delete the pod Apr 10 21:31:31.651: INFO: Waiting for pod pod-e50d43b8-0ac0-4875-8b03-e73d4afe8bcd to disappear Apr 10 21:31:31.657: INFO: Pod pod-e50d43b8-0ac0-4875-8b03-e73d4afe8bcd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:31:31.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-311" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1626,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:31:31.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 10 21:31:36.122: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:31:36.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3131" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1643,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:31:36.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
pod-subpath-test-downwardapi-tpwh STEP: Creating a pod to test atomic-volume-subpath Apr 10 21:31:36.240: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-tpwh" in namespace "subpath-7789" to be "success or failure" Apr 10 21:31:36.326: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Pending", Reason="", readiness=false. Elapsed: 85.938492ms Apr 10 21:31:38.330: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090072314s Apr 10 21:31:40.335: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 4.094731089s Apr 10 21:31:42.340: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 6.099333277s Apr 10 21:31:44.344: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 8.10395003s Apr 10 21:31:46.348: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 10.10773768s Apr 10 21:31:48.352: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 12.111619311s Apr 10 21:31:50.357: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 14.116153907s Apr 10 21:31:52.360: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 16.119975929s Apr 10 21:31:54.365: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 18.124637131s Apr 10 21:31:56.369: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 20.128684152s Apr 10 21:31:58.373: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Running", Reason="", readiness=true. Elapsed: 22.132643598s Apr 10 21:32:00.376: INFO: Pod "pod-subpath-test-downwardapi-tpwh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.135567326s STEP: Saw pod success Apr 10 21:32:00.376: INFO: Pod "pod-subpath-test-downwardapi-tpwh" satisfied condition "success or failure" Apr 10 21:32:00.379: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-tpwh container test-container-subpath-downwardapi-tpwh: STEP: delete the pod Apr 10 21:32:00.394: INFO: Waiting for pod pod-subpath-test-downwardapi-tpwh to disappear Apr 10 21:32:00.399: INFO: Pod pod-subpath-test-downwardapi-tpwh no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-tpwh Apr 10 21:32:00.399: INFO: Deleting pod "pod-subpath-test-downwardapi-tpwh" in namespace "subpath-7789" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:32:00.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7789" for this suite. • [SLOW TEST:24.242 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":90,"skipped":1653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client 
Apr 10 21:32:00.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Apr 10 21:32:00.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3865' Apr 10 21:32:00.787: INFO: stderr: "" Apr 10 21:32:00.787: INFO: stdout: "pod/pause created\n" Apr 10 21:32:00.787: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 10 21:32:00.787: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3865" to be "running and ready" Apr 10 21:32:00.799: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.584609ms Apr 10 21:32:02.855: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067300204s Apr 10 21:32:04.859: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.071259968s Apr 10 21:32:04.859: INFO: Pod "pause" satisfied condition "running and ready" Apr 10 21:32:04.859: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Apr 10 21:32:04.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3865' Apr 10 21:32:04.954: INFO: stderr: "" Apr 10 21:32:04.954: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 10 21:32:04.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3865' Apr 10 21:32:05.039: INFO: stderr: "" Apr 10 21:32:05.039: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 10 21:32:05.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3865' Apr 10 21:32:05.130: INFO: stderr: "" Apr 10 21:32:05.130: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 10 21:32:05.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3865' Apr 10 21:32:05.230: INFO: stderr: "" Apr 10 21:32:05.230: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Apr 10 21:32:05.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3865' Apr 10 21:32:05.352: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has 
been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 21:32:05.352: INFO: stdout: "pod \"pause\" force deleted\n" Apr 10 21:32:05.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3865' Apr 10 21:32:05.446: INFO: stderr: "No resources found in kubectl-3865 namespace.\n" Apr 10 21:32:05.446: INFO: stdout: "" Apr 10 21:32:05.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3865 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:32:05.591: INFO: stderr: "" Apr 10 21:32:05.591: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:32:05.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3865" for this suite. 
• [SLOW TEST:5.189 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":91,"skipped":1681,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:32:05.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:32:05.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-497" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":92,"skipped":1688,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:32:05.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 10 21:32:10.130: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the 
container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:32:10.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4781" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1689,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:32:10.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 10 21:32:14.261: INFO: &Pod{ObjectMeta:{send-events-c10f0233-cdec-4d64-a691-08e0ca37f444 events-9374 /api/v1/namespaces/events-9374/pods/send-events-c10f0233-cdec-4d64-a691-08e0ca37f444 f59f217a-8bd8-4e6e-88e1-92e12e0e8fd3 7040994 0 2020-04-10 21:32:10 +0000 UTC map[name:foo time:241430185] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5vmsg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5vmsg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5vmsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:32:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:32:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:32:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.185,StartTime:2020-04-10 21:32:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:32:12 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://65b946d53b92aa7fd3f5a3563abce92fff4c96a5d49ea9b7b0b839ea2137b322,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 10 21:32:16.265: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 10 21:32:18.270: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:32:18.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9374" for this suite. 
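The full PodSpec dumped above boils down to a small manifest. A minimal reconstruction of the pod this Events test watches, taken from the dump (container `p`, the agnhost `serve-hostname` image, port 80); `metadata.name` is an assumption, since the suite generates a unique name per run:

```yaml
# Reconstructed from the PodSpec dump above; metadata.name is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: send-events-example   # hypothetical name
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["serve-hostname"]
    ports:
    - containerPort: 80
      protocol: TCP
  restartPolicy: Always
```

The scheduler and kubelet events the test asserts on can be listed for such a pod with `kubectl get events --field-selector involvedObject.name=<pod-name>`.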
• [SLOW TEST:8.131 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":94,"skipped":1701,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:32:18.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 10 21:32:18.379: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 10 21:32:28.799: INFO: >>> kubeConfig: /root/.kube/config Apr 10 21:32:31.786: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:32:42.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6987" for this suite. • [SLOW TEST:24.118 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":95,"skipped":1703,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:32:42.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 10 21:32:42.517: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:42.521: INFO: Number of nodes with available pods: 0 Apr 10 21:32:42.521: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:32:43.525: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:43.528: INFO: Number of nodes with available pods: 0 Apr 10 21:32:43.528: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:32:44.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:44.531: INFO: Number of nodes with available pods: 0 Apr 10 21:32:44.531: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:32:45.555: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:45.558: INFO: Number of nodes with available pods: 1 Apr 10 21:32:45.558: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:32:46.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:46.530: INFO: Number of nodes with available pods: 2 Apr 10 21:32:46.530: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 10 21:32:46.622: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:46.652: INFO: Number of nodes with available pods: 1 Apr 10 21:32:46.652: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:32:47.657: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:47.660: INFO: Number of nodes with available pods: 1 Apr 10 21:32:47.660: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:32:48.657: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:48.661: INFO: Number of nodes with available pods: 1 Apr 10 21:32:48.661: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:32:49.657: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:32:49.661: INFO: Number of nodes with available pods: 2 Apr 10 21:32:49.661: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9450, will wait for the garbage collector to delete the pods Apr 10 21:32:49.726: INFO: Deleting DaemonSet.extensions daemon-set took: 6.173101ms Apr 10 21:32:50.326: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.215993ms Apr 10 21:32:59.561: INFO: Number of nodes with available pods: 0 Apr 10 21:32:59.561: INFO: Number of running nodes: 0, number of available pods: 0 Apr 10 21:32:59.564: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9450/daemonsets","resourceVersion":"7041245"},"items":null} Apr 10 21:32:59.567: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9450/pods","resourceVersion":"7041245"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:32:59.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9450" for this suite. 
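A DaemonSet of the shape this test exercises can be sketched as below. The log only names the DaemonSet "daemon-set", so the label key and image are assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # assumed label; not shown in the log
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
```

Pods land on every schedulable node; as the log above shows, jerma-control-plane is skipped because the DaemonSet's pods do not tolerate its `node-role.kubernetes.io/master:NoSchedule` taint.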
• [SLOW TEST:17.149 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":96,"skipped":1707,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:32:59.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 10 21:32:59.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6284' Apr 10 21:32:59.963: INFO: stderr: "" Apr 10 21:32:59.963: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 10 21:33:00.967: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:33:00.967: INFO: Found 0 / 1 Apr 10 21:33:01.967: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:33:01.967: INFO: Found 0 / 1 Apr 10 21:33:02.968: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:33:02.968: INFO: Found 1 / 1 Apr 10 21:33:02.968: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 10 21:33:02.971: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:33:02.971: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 10 21:33:02.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-wjlds --namespace=kubectl-6284 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 10 21:33:03.075: INFO: stderr: "" Apr 10 21:33:03.075: INFO: stdout: "pod/agnhost-master-wjlds patched\n" STEP: checking annotations Apr 10 21:33:03.078: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 21:33:03.078: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:33:03.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6284" for this suite. 
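The strategic-merge patch the test passes inline with `-p` above can also be kept as a standalone file (the filename `patch.yaml` is an assumption):

```yaml
# Same patch body the log shows inline: {"metadata":{"annotations":{"x":"y"}}}
metadata:
  annotations:
    x: "y"
```

With the kubectl version in this run the patch is passed inline as JSON, exactly as logged; on newer kubectl releases the same YAML file can be supplied with `--patch-file patch.yaml`.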
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":97,"skipped":1715,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:33:03.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4721 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4721 to expose endpoints map[] Apr 10 21:33:03.279: INFO: Get endpoints failed (64.604117ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 10 21:33:04.283: INFO: successfully validated that service endpoint-test2 in namespace services-4721 exposes endpoints map[] (1.068661897s elapsed) STEP: Creating pod pod1 in namespace services-4721 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4721 to expose endpoints map[pod1:[80]] Apr 10 21:33:08.395: INFO: successfully validated that service endpoint-test2 in namespace services-4721 exposes endpoints map[pod1:[80]] (4.088380584s elapsed) STEP: Creating pod pod2 in namespace services-4721 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4721 to expose endpoints map[pod1:[80] pod2:[80]] Apr 10 
21:33:11.615: INFO: successfully validated that service endpoint-test2 in namespace services-4721 exposes endpoints map[pod1:[80] pod2:[80]] (3.20883661s elapsed) STEP: Deleting pod pod1 in namespace services-4721 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4721 to expose endpoints map[pod2:[80]] Apr 10 21:33:11.660: INFO: successfully validated that service endpoint-test2 in namespace services-4721 exposes endpoints map[pod2:[80]] (25.055904ms elapsed) STEP: Deleting pod pod2 in namespace services-4721 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4721 to expose endpoints map[] Apr 10 21:33:12.685: INFO: successfully validated that service endpoint-test2 in namespace services-4721 exposes endpoints map[] (1.020844065s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:33:12.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4721" for this suite. 
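The endpoint bookkeeping above follows from a plain selector-based Service plus pods carrying the matching label. A minimal sketch, where the label key and the pod image are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2         # assumed selector; must match the pod labels
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2
spec:
  containers:
  - name: app
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    ports:
    - containerPort: 80
```

As pods matching the selector are created and deleted, the endpoints controller updates the `endpoint-test2` Endpoints object, which is what the `map[pod1:[80] pod2:[80]]` assertions in the log track.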
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.626 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":98,"skipped":1735,"failed":0} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:33:12.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9336 STEP: creating replication controller nodeport-test in namespace services-9336 I0410 21:33:12.900283 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9336, replica count: 2 I0410 21:33:15.950747 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 21:33:18.950976 7 
runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 10 21:33:18.951: INFO: Creating new exec pod Apr 10 21:33:23.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9336 execpodxgj5r -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 10 21:33:24.194: INFO: stderr: "I0410 21:33:24.093809 1153 log.go:172] (0xc000a069a0) (0xc000671ae0) Create stream\nI0410 21:33:24.093871 1153 log.go:172] (0xc000a069a0) (0xc000671ae0) Stream added, broadcasting: 1\nI0410 21:33:24.096434 1153 log.go:172] (0xc000a069a0) Reply frame received for 1\nI0410 21:33:24.096462 1153 log.go:172] (0xc000a069a0) (0xc000671cc0) Create stream\nI0410 21:33:24.096473 1153 log.go:172] (0xc000a069a0) (0xc000671cc0) Stream added, broadcasting: 3\nI0410 21:33:24.097914 1153 log.go:172] (0xc000a069a0) Reply frame received for 3\nI0410 21:33:24.097966 1153 log.go:172] (0xc000a069a0) (0xc000671d60) Create stream\nI0410 21:33:24.097981 1153 log.go:172] (0xc000a069a0) (0xc000671d60) Stream added, broadcasting: 5\nI0410 21:33:24.099188 1153 log.go:172] (0xc000a069a0) Reply frame received for 5\nI0410 21:33:24.185885 1153 log.go:172] (0xc000a069a0) Data frame received for 5\nI0410 21:33:24.185915 1153 log.go:172] (0xc000671d60) (5) Data frame handling\nI0410 21:33:24.185934 1153 log.go:172] (0xc000671d60) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0410 21:33:24.186421 1153 log.go:172] (0xc000a069a0) Data frame received for 5\nI0410 21:33:24.186446 1153 log.go:172] (0xc000671d60) (5) Data frame handling\nI0410 21:33:24.186465 1153 log.go:172] (0xc000671d60) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0410 21:33:24.186603 1153 log.go:172] (0xc000a069a0) Data frame received for 5\nI0410 21:33:24.186622 1153 log.go:172] (0xc000671d60) (5) Data frame handling\nI0410 21:33:24.186745 1153 log.go:172] (0xc000a069a0) Data 
frame received for 3\nI0410 21:33:24.186764 1153 log.go:172] (0xc000671cc0) (3) Data frame handling\nI0410 21:33:24.188763 1153 log.go:172] (0xc000a069a0) Data frame received for 1\nI0410 21:33:24.188802 1153 log.go:172] (0xc000671ae0) (1) Data frame handling\nI0410 21:33:24.188836 1153 log.go:172] (0xc000671ae0) (1) Data frame sent\nI0410 21:33:24.188856 1153 log.go:172] (0xc000a069a0) (0xc000671ae0) Stream removed, broadcasting: 1\nI0410 21:33:24.188885 1153 log.go:172] (0xc000a069a0) Go away received\nI0410 21:33:24.189463 1153 log.go:172] (0xc000a069a0) (0xc000671ae0) Stream removed, broadcasting: 1\nI0410 21:33:24.189485 1153 log.go:172] (0xc000a069a0) (0xc000671cc0) Stream removed, broadcasting: 3\nI0410 21:33:24.189497 1153 log.go:172] (0xc000a069a0) (0xc000671d60) Stream removed, broadcasting: 5\n" Apr 10 21:33:24.194: INFO: stdout: "" Apr 10 21:33:24.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9336 execpodxgj5r -- /bin/sh -x -c nc -zv -t -w 2 10.97.31.74 80' Apr 10 21:33:24.401: INFO: stderr: "I0410 21:33:24.331897 1175 log.go:172] (0xc0000f9130) (0xc0006bb9a0) Create stream\nI0410 21:33:24.331967 1175 log.go:172] (0xc0000f9130) (0xc0006bb9a0) Stream added, broadcasting: 1\nI0410 21:33:24.334696 1175 log.go:172] (0xc0000f9130) Reply frame received for 1\nI0410 21:33:24.334744 1175 log.go:172] (0xc0000f9130) (0xc0006bbb80) Create stream\nI0410 21:33:24.334758 1175 log.go:172] (0xc0000f9130) (0xc0006bbb80) Stream added, broadcasting: 3\nI0410 21:33:24.335726 1175 log.go:172] (0xc0000f9130) Reply frame received for 3\nI0410 21:33:24.335761 1175 log.go:172] (0xc0000f9130) (0xc0006bbc20) Create stream\nI0410 21:33:24.335773 1175 log.go:172] (0xc0000f9130) (0xc0006bbc20) Stream added, broadcasting: 5\nI0410 21:33:24.336730 1175 log.go:172] (0xc0000f9130) Reply frame received for 5\nI0410 21:33:24.392906 1175 log.go:172] (0xc0000f9130) Data frame received for 5\nI0410 21:33:24.392943 1175 log.go:172] 
(0xc0006bbc20) (5) Data frame handling\nI0410 21:33:24.392958 1175 log.go:172] (0xc0006bbc20) (5) Data frame sent\n+ nc -zv -t -w 2 10.97.31.74 80\nConnection to 10.97.31.74 80 port [tcp/http] succeeded!\nI0410 21:33:24.392986 1175 log.go:172] (0xc0000f9130) Data frame received for 3\nI0410 21:33:24.393018 1175 log.go:172] (0xc0006bbb80) (3) Data frame handling\nI0410 21:33:24.393250 1175 log.go:172] (0xc0000f9130) Data frame received for 5\nI0410 21:33:24.393285 1175 log.go:172] (0xc0006bbc20) (5) Data frame handling\nI0410 21:33:24.394691 1175 log.go:172] (0xc0000f9130) Data frame received for 1\nI0410 21:33:24.394721 1175 log.go:172] (0xc0006bb9a0) (1) Data frame handling\nI0410 21:33:24.394754 1175 log.go:172] (0xc0006bb9a0) (1) Data frame sent\nI0410 21:33:24.394783 1175 log.go:172] (0xc0000f9130) (0xc0006bb9a0) Stream removed, broadcasting: 1\nI0410 21:33:24.394975 1175 log.go:172] (0xc0000f9130) Go away received\nI0410 21:33:24.395279 1175 log.go:172] (0xc0000f9130) (0xc0006bb9a0) Stream removed, broadcasting: 1\nI0410 21:33:24.395310 1175 log.go:172] (0xc0000f9130) (0xc0006bbb80) Stream removed, broadcasting: 3\nI0410 21:33:24.395334 1175 log.go:172] (0xc0000f9130) (0xc0006bbc20) Stream removed, broadcasting: 5\n" Apr 10 21:33:24.401: INFO: stdout: "" Apr 10 21:33:24.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9336 execpodxgj5r -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30729' Apr 10 21:33:24.635: INFO: stderr: "I0410 21:33:24.558397 1197 log.go:172] (0xc000a300b0) (0xc0004cf4a0) Create stream\nI0410 21:33:24.558469 1197 log.go:172] (0xc000a300b0) (0xc0004cf4a0) Stream added, broadcasting: 1\nI0410 21:33:24.561059 1197 log.go:172] (0xc000a300b0) Reply frame received for 1\nI0410 21:33:24.561260 1197 log.go:172] (0xc000a300b0) (0xc0009a2000) Create stream\nI0410 21:33:24.561291 1197 log.go:172] (0xc000a300b0) (0xc0009a2000) Stream added, broadcasting: 3\nI0410 21:33:24.562319 1197 log.go:172] 
(0xc000a300b0) Reply frame received for 3\nI0410 21:33:24.562355 1197 log.go:172] (0xc000a300b0) (0xc0009a20a0) Create stream\nI0410 21:33:24.562366 1197 log.go:172] (0xc000a300b0) (0xc0009a20a0) Stream added, broadcasting: 5\nI0410 21:33:24.563442 1197 log.go:172] (0xc000a300b0) Reply frame received for 5\nI0410 21:33:24.628378 1197 log.go:172] (0xc000a300b0) Data frame received for 5\nI0410 21:33:24.628425 1197 log.go:172] (0xc0009a20a0) (5) Data frame handling\nI0410 21:33:24.628452 1197 log.go:172] (0xc0009a20a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30729\nI0410 21:33:24.628722 1197 log.go:172] (0xc000a300b0) Data frame received for 5\nI0410 21:33:24.628794 1197 log.go:172] (0xc0009a20a0) (5) Data frame handling\nI0410 21:33:24.628817 1197 log.go:172] (0xc0009a20a0) (5) Data frame sent\nConnection to 172.17.0.10 30729 port [tcp/30729] succeeded!\nI0410 21:33:24.629607 1197 log.go:172] (0xc000a300b0) Data frame received for 5\nI0410 21:33:24.629655 1197 log.go:172] (0xc0009a20a0) (5) Data frame handling\nI0410 21:33:24.629683 1197 log.go:172] (0xc000a300b0) Data frame received for 3\nI0410 21:33:24.629704 1197 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0410 21:33:24.631307 1197 log.go:172] (0xc000a300b0) Data frame received for 1\nI0410 21:33:24.631333 1197 log.go:172] (0xc0004cf4a0) (1) Data frame handling\nI0410 21:33:24.631352 1197 log.go:172] (0xc0004cf4a0) (1) Data frame sent\nI0410 21:33:24.631374 1197 log.go:172] (0xc000a300b0) (0xc0004cf4a0) Stream removed, broadcasting: 1\nI0410 21:33:24.631401 1197 log.go:172] (0xc000a300b0) Go away received\nI0410 21:33:24.631757 1197 log.go:172] (0xc000a300b0) (0xc0004cf4a0) Stream removed, broadcasting: 1\nI0410 21:33:24.631771 1197 log.go:172] (0xc000a300b0) (0xc0009a2000) Stream removed, broadcasting: 3\nI0410 21:33:24.631778 1197 log.go:172] (0xc000a300b0) (0xc0009a20a0) Stream removed, broadcasting: 5\n" Apr 10 21:33:24.635: INFO: stdout: "" Apr 10 21:33:24.635: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9336 execpodxgj5r -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30729' Apr 10 21:33:24.857: INFO: stderr: "I0410 21:33:24.759943 1217 log.go:172] (0xc000b92000) (0xc000626780) Create stream\nI0410 21:33:24.760017 1217 log.go:172] (0xc000b92000) (0xc000626780) Stream added, broadcasting: 1\nI0410 21:33:24.770567 1217 log.go:172] (0xc000b92000) Reply frame received for 1\nI0410 21:33:24.770623 1217 log.go:172] (0xc000b92000) (0xc000451540) Create stream\nI0410 21:33:24.770633 1217 log.go:172] (0xc000b92000) (0xc000451540) Stream added, broadcasting: 3\nI0410 21:33:24.772649 1217 log.go:172] (0xc000b92000) Reply frame received for 3\nI0410 21:33:24.772679 1217 log.go:172] (0xc000b92000) (0xc000904000) Create stream\nI0410 21:33:24.772691 1217 log.go:172] (0xc000b92000) (0xc000904000) Stream added, broadcasting: 5\nI0410 21:33:24.773573 1217 log.go:172] (0xc000b92000) Reply frame received for 5\nI0410 21:33:24.851309 1217 log.go:172] (0xc000b92000) Data frame received for 3\nI0410 21:33:24.851357 1217 log.go:172] (0xc000451540) (3) Data frame handling\nI0410 21:33:24.851387 1217 log.go:172] (0xc000b92000) Data frame received for 5\nI0410 21:33:24.851401 1217 log.go:172] (0xc000904000) (5) Data frame handling\nI0410 21:33:24.851412 1217 log.go:172] (0xc000904000) (5) Data frame sent\nI0410 21:33:24.851417 1217 log.go:172] (0xc000b92000) Data frame received for 5\nI0410 21:33:24.851422 1217 log.go:172] (0xc000904000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30729\nConnection to 172.17.0.8 30729 port [tcp/30729] succeeded!\nI0410 21:33:24.852826 1217 log.go:172] (0xc000b92000) Data frame received for 1\nI0410 21:33:24.852849 1217 log.go:172] (0xc000626780) (1) Data frame handling\nI0410 21:33:24.852872 1217 log.go:172] (0xc000626780) (1) Data frame sent\nI0410 21:33:24.852891 1217 log.go:172] (0xc000b92000) (0xc000626780) Stream removed, broadcasting: 1\nI0410 21:33:24.852979 1217 
log.go:172] (0xc000b92000) Go away received\nI0410 21:33:24.853404 1217 log.go:172] (0xc000b92000) (0xc000626780) Stream removed, broadcasting: 1\nI0410 21:33:24.853432 1217 log.go:172] (0xc000b92000) (0xc000451540) Stream removed, broadcasting: 3\nI0410 21:33:24.853450 1217 log.go:172] (0xc000b92000) (0xc000904000) Stream removed, broadcasting: 5\n" Apr 10 21:33:24.858: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:33:24.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9336" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.153 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":99,"skipped":1738,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:33:24.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 
[It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 10 21:33:29.463: INFO: Successfully updated pod "pod-update-76395cfd-ebb0-43e5-acc1-e57fef7ae775" STEP: verifying the updated pod is in kubernetes Apr 10 21:33:29.470: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:33:29.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9854" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1775,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:33:29.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: 
the container should be terminated STEP: the termination message should be set Apr 10 21:33:32.711: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:33:32.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6578" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1797,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:33:32.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 10 21:33:41.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 10 21:33:41.332: INFO: Pod pod-with-poststart-http-hook still exists Apr 10 21:33:43.332: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 10 21:33:43.336: INFO: Pod pod-with-poststart-http-hook still exists Apr 10 21:33:45.332: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 10 21:33:45.336: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:33:45.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4975" for this suite. 
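The "Waiting for pod ... to disappear" lines above show the framework polling every 2 seconds after deleting the hook pod until the GET no longer finds it. A minimal generic sketch of that loop; `get_pod`, the 2s interval, and the timeout are assumptions mirroring the log, not the framework's actual implementation:

```python
import time

def wait_for_disappear(get_pod, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_pod() until it returns None (pod deleted) or timeout elapses.

    get_pod: callable returning the pod object, or None once it is gone.
    Returns True if the pod disappeared within the timeout, False otherwise.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if get_pod() is None:
            return True
        sleep(interval)
    return False
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is also why the e2e framework's own waits accept explicit poll/timeout durations.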
• [SLOW TEST:12.377 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1800,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:33:45.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:33:45.390: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 10 21:33:47.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7610 create -f -' Apr 10 21:33:50.371: INFO: stderr: "" Apr 10 21:33:50.371: INFO: stdout: 
"e2e-test-crd-publish-openapi-5988-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 10 21:33:50.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7610 delete e2e-test-crd-publish-openapi-5988-crds test-cr' Apr 10 21:33:50.497: INFO: stderr: "" Apr 10 21:33:50.497: INFO: stdout: "e2e-test-crd-publish-openapi-5988-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 10 21:33:50.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7610 apply -f -' Apr 10 21:33:51.095: INFO: stderr: "" Apr 10 21:33:51.095: INFO: stdout: "e2e-test-crd-publish-openapi-5988-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 10 21:33:51.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7610 delete e2e-test-crd-publish-openapi-5988-crds test-cr' Apr 10 21:33:51.203: INFO: stderr: "" Apr 10 21:33:51.203: INFO: stdout: "e2e-test-crd-publish-openapi-5988-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 10 21:33:51.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5988-crds' Apr 10 21:33:51.569: INFO: stderr: "" Apr 10 21:33:51.569: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5988-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:33:54.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7610" for this suite. 
• [SLOW TEST:9.124 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":103,"skipped":1801,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:33:54.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:33:54.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c27e380d-3cae-45cf-b2d9-e26881500c60" in namespace "projected-5932" to be "success or failure" Apr 10 21:33:54.553: INFO: Pod "downwardapi-volume-c27e380d-3cae-45cf-b2d9-e26881500c60": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.737421ms Apr 10 21:33:56.557: INFO: Pod "downwardapi-volume-c27e380d-3cae-45cf-b2d9-e26881500c60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007288751s Apr 10 21:33:58.560: INFO: Pod "downwardapi-volume-c27e380d-3cae-45cf-b2d9-e26881500c60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010638225s STEP: Saw pod success Apr 10 21:33:58.560: INFO: Pod "downwardapi-volume-c27e380d-3cae-45cf-b2d9-e26881500c60" satisfied condition "success or failure" Apr 10 21:33:58.563: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c27e380d-3cae-45cf-b2d9-e26881500c60 container client-container: STEP: delete the pod Apr 10 21:33:58.597: INFO: Waiting for pod downwardapi-volume-c27e380d-3cae-45cf-b2d9-e26881500c60 to disappear Apr 10 21:33:58.601: INFO: Pod downwardapi-volume-c27e380d-3cae-45cf-b2d9-e26881500c60 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:33:58.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5932" for this suite. 
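The condition "success or failure" waited on above means the pod reached a terminal phase: Succeeded or Failed. A sketch of that 2-second polling loop; the getter, interval, and 5m timeout are assumptions read off the log, not the framework source:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase.

    get_phase: callable returning the pod's .status.phase string.
    Returns the terminal phase, or raises TimeoutError if none is reached.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

Note that the test then distinguishes the two terminal phases: this downward-API test wants Succeeded, while the activeDeadlineSeconds test earlier in the run waits for Failed with reason DeadlineExceeded.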
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1828,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:33:58.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 10 21:33:58.693: INFO: Waiting up to 5m0s for pod "pod-0d66ae27-9c87-473d-b8c2-61d1414d8778" in namespace "emptydir-4237" to be "success or failure" Apr 10 21:33:58.721: INFO: Pod "pod-0d66ae27-9c87-473d-b8c2-61d1414d8778": Phase="Pending", Reason="", readiness=false. Elapsed: 27.748734ms Apr 10 21:34:00.726: INFO: Pod "pod-0d66ae27-9c87-473d-b8c2-61d1414d8778": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032005875s Apr 10 21:34:02.741: INFO: Pod "pod-0d66ae27-9c87-473d-b8c2-61d1414d8778": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047755362s STEP: Saw pod success Apr 10 21:34:02.741: INFO: Pod "pod-0d66ae27-9c87-473d-b8c2-61d1414d8778" satisfied condition "success or failure" Apr 10 21:34:02.743: INFO: Trying to get logs from node jerma-worker pod pod-0d66ae27-9c87-473d-b8c2-61d1414d8778 container test-container: STEP: delete the pod Apr 10 21:34:02.771: INFO: Waiting for pod pod-0d66ae27-9c87-473d-b8c2-61d1414d8778 to disappear Apr 10 21:34:02.775: INFO: Pod pod-0d66ae27-9c87-473d-b8c2-61d1414d8778 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:34:02.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4237" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1851,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:34:02.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:34:02.876: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod 
quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 10 21:34:04.922: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:34:06.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2501" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":106,"skipped":1855,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:34:06.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 10 21:34:06.545: INFO: Waiting up to 5m0s for pod "pod-5ce53634-677d-4672-b6f5-bc331bd2c8f7" in namespace "emptydir-8592" to be "success or failure" Apr 10 21:34:06.999: INFO: Pod "pod-5ce53634-677d-4672-b6f5-bc331bd2c8f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 454.002871ms Apr 10 21:34:09.003: INFO: Pod "pod-5ce53634-677d-4672-b6f5-bc331bd2c8f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457743463s Apr 10 21:34:11.007: INFO: Pod "pod-5ce53634-677d-4672-b6f5-bc331bd2c8f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.461228254s STEP: Saw pod success Apr 10 21:34:11.007: INFO: Pod "pod-5ce53634-677d-4672-b6f5-bc331bd2c8f7" satisfied condition "success or failure" Apr 10 21:34:11.010: INFO: Trying to get logs from node jerma-worker pod pod-5ce53634-677d-4672-b6f5-bc331bd2c8f7 container test-container: STEP: delete the pod Apr 10 21:34:11.034: INFO: Waiting for pod pod-5ce53634-677d-4672-b6f5-bc331bd2c8f7 to disappear Apr 10 21:34:11.044: INFO: Pod pod-5ce53634-677d-4672-b6f5-bc331bd2c8f7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:34:11.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8592" for this suite. 
• [SLOW TEST:5.016 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1867,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:34:11.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 10 21:34:11.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6640' Apr 10 21:34:11.246: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 10 21:34:11.246: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 10 21:34:11.268: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 10 21:34:11.287: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 10 21:34:11.333: INFO: scanned /root for discovery docs: Apr 10 21:34:11.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6640' Apr 10 21:34:27.295: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 10 21:34:27.296: INFO: stdout: "Created e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b\nScaling up e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Apr 10 21:34:27.296: INFO: stdout: "Created e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b\nScaling up e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 10 21:34:27.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6640' Apr 10 21:34:27.387: INFO: stderr: "" Apr 10 21:34:27.387: INFO: stdout: "e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b-5sqtk e2e-test-httpd-rc-sq68w " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Apr 10 21:34:32.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6640' Apr 10 21:34:32.482: INFO: stderr: "" Apr 10 21:34:32.482: INFO: stdout: "e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b-5sqtk " Apr 10 21:34:32.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b-5sqtk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6640' Apr 10 21:34:32.573: INFO: stderr: "" Apr 10 21:34:32.573: INFO: stdout: "true" Apr 10 21:34:32.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b-5sqtk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6640' Apr 10 21:34:32.676: INFO: stderr: "" Apr 10 21:34:32.676: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 10 21:34:32.676: INFO: e2e-test-httpd-rc-a5ffd0948ba5ba9fef3d9af4ae690d5b-5sqtk is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Apr 10 21:34:32.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6640' Apr 10 21:34:32.789: INFO: stderr: "" Apr 10 21:34:32.789: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:34:32.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6640" for this suite. 
• [SLOW TEST:21.743 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":108,"skipped":1878,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:34:32.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 10 21:34:32.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5844' Apr 10 21:34:32.985: INFO: stderr: "kubectl run 
--generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 10 21:34:32.985: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Apr 10 21:34:35.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5844' Apr 10 21:34:35.263: INFO: stderr: "" Apr 10 21:34:35.263: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:34:35.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5844" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":109,"skipped":1896,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:34:35.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:34:35.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93" in namespace "downward-api-2955" to be "success or failure" Apr 10 21:34:35.396: INFO: Pod "downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93": Phase="Pending", Reason="", readiness=false. Elapsed: 18.641456ms Apr 10 21:34:37.431: INFO: Pod "downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052797122s Apr 10 21:34:39.467: INFO: Pod "downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.089338852s Apr 10 21:34:41.471: INFO: Pod "downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093626555s STEP: Saw pod success Apr 10 21:34:41.471: INFO: Pod "downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93" satisfied condition "success or failure" Apr 10 21:34:41.474: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93 container client-container: STEP: delete the pod Apr 10 21:34:41.546: INFO: Waiting for pod downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93 to disappear Apr 10 21:34:41.558: INFO: Pod downwardapi-volume-77dd4f09-70af-43f3-a8ef-207cf1658e93 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:34:41.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2955" for this suite. • [SLOW TEST:6.284 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1898,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:34:41.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:34:57.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8876" for this suite. • [SLOW TEST:16.263 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":111,"skipped":1911,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:34:57.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:34:58.376: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:35:00.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151298, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151298, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151298, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151298, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:35:03.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:35:03.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1892-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:04.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1582" for this suite. STEP: Destroying namespace "webhook-1582-markers" for this suite. 
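(Editor's note, for context on the webhook test above: the mutating webhook the test registers via the AdmissionRegistration API would look roughly like the following manifest. This is a hypothetical sketch, not the test's actual fixture; the configuration name, webhook path, and `caBundle` omission are assumptions, while the service name `e2e-test-webhook`, the namespace `webhook-1582`, and the resource group `webhook.example.com` come from the log.)

```yaml
# Hypothetical MutatingWebhookConfiguration resembling the one the e2e test registers.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook        # assumed name
webhooks:
  - name: mutate-crd.webhook.example.com # assumed webhook name
    clientConfig:
      service:
        name: e2e-test-webhook           # service deployed by the test (from the log)
        namespace: webhook-1582
        path: /mutating-custom-resource  # assumed path
      # caBundle: <base64 CA cert>       # the test wires in its generated server cert
    rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-1892-crds"]
    sideEffects: None
    admissionReviewVersions: ["v1", "v1beta1"]
```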
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.857 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":112,"skipped":1914,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:04.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:35:04.781: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-24c50b96-b6e9-4f1f-b39f-3e47d955b1c3" in namespace "security-context-test-8443" to be "success or failure" Apr 10 21:35:04.787: INFO: Pod 
"busybox-privileged-false-24c50b96-b6e9-4f1f-b39f-3e47d955b1c3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.310836ms Apr 10 21:35:06.791: INFO: Pod "busybox-privileged-false-24c50b96-b6e9-4f1f-b39f-3e47d955b1c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009483617s Apr 10 21:35:08.802: INFO: Pod "busybox-privileged-false-24c50b96-b6e9-4f1f-b39f-3e47d955b1c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020891548s Apr 10 21:35:08.802: INFO: Pod "busybox-privileged-false-24c50b96-b6e9-4f1f-b39f-3e47d955b1c3" satisfied condition "success or failure" Apr 10 21:35:08.807: INFO: Got logs for pod "busybox-privileged-false-24c50b96-b6e9-4f1f-b39f-3e47d955b1c3": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:08.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8443" for this suite. 
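(Editor's note: the "Operation not permitted" log line above is the expected outcome of running a network-configuration command in an unprivileged container. A minimal pod spec reproducing this check might look as follows; the pod name and command are assumptions based on the log's busybox image and RTNETLINK error.)

```yaml
# Hypothetical sketch of the unprivileged pod under test.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false       # name pattern taken from the log
spec:
  restartPolicy: Never
  containers:
    - name: busybox
      image: busybox
      # Attempting to modify network interfaces should fail without privileges,
      # producing "ip: RTNETLINK answers: Operation not permitted".
      command: ["sh", "-c", "ip link add dummy0 type dummy"]
      securityContext:
        privileged: false
```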
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1933,"failed":0} S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:08.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 10 21:35:08.860: INFO: Waiting up to 5m0s for pod "downward-api-b03e8fba-cd1e-4bcf-8d46-4407e9069878" in namespace "downward-api-6936" to be "success or failure" Apr 10 21:35:08.871: INFO: Pod "downward-api-b03e8fba-cd1e-4bcf-8d46-4407e9069878": Phase="Pending", Reason="", readiness=false. Elapsed: 10.520561ms Apr 10 21:35:10.875: INFO: Pod "downward-api-b03e8fba-cd1e-4bcf-8d46-4407e9069878": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0140187s Apr 10 21:35:12.879: INFO: Pod "downward-api-b03e8fba-cd1e-4bcf-8d46-4407e9069878": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018258215s STEP: Saw pod success Apr 10 21:35:12.879: INFO: Pod "downward-api-b03e8fba-cd1e-4bcf-8d46-4407e9069878" satisfied condition "success or failure" Apr 10 21:35:12.882: INFO: Trying to get logs from node jerma-worker2 pod downward-api-b03e8fba-cd1e-4bcf-8d46-4407e9069878 container dapi-container: STEP: delete the pod Apr 10 21:35:12.942: INFO: Waiting for pod downward-api-b03e8fba-cd1e-4bcf-8d46-4407e9069878 to disappear Apr 10 21:35:12.951: INFO: Pod downward-api-b03e8fba-cd1e-4bcf-8d46-4407e9069878 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:12.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6936" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1934,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:12.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 10 21:35:13.006: 
INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:27.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7053" for this suite. • [SLOW TEST:14.291 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":115,"skipped":1935,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:27.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:35:27.434: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:28.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4424" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":116,"skipped":1938,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:28.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:35:28.761: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c51672d-d3da-4b99-8a28-aefe95a6c9a7" in namespace "downward-api-4424" to be "success or failure" Apr 10 21:35:28.763: INFO: Pod 
"downwardapi-volume-5c51672d-d3da-4b99-8a28-aefe95a6c9a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.592068ms Apr 10 21:35:30.800: INFO: Pod "downwardapi-volume-5c51672d-d3da-4b99-8a28-aefe95a6c9a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038705373s Apr 10 21:35:32.803: INFO: Pod "downwardapi-volume-5c51672d-d3da-4b99-8a28-aefe95a6c9a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04205142s STEP: Saw pod success Apr 10 21:35:32.803: INFO: Pod "downwardapi-volume-5c51672d-d3da-4b99-8a28-aefe95a6c9a7" satisfied condition "success or failure" Apr 10 21:35:32.805: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5c51672d-d3da-4b99-8a28-aefe95a6c9a7 container client-container: STEP: delete the pod Apr 10 21:35:32.825: INFO: Waiting for pod downwardapi-volume-5c51672d-d3da-4b99-8a28-aefe95a6c9a7 to disappear Apr 10 21:35:32.886: INFO: Pod downwardapi-volume-5c51672d-d3da-4b99-8a28-aefe95a6c9a7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:32.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4424" for this suite. 
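(Editor's note: the Downward API volume test above exposes the container's memory limit as a file. A minimal pod along these lines could be written as follows; the pod name, mount path, and 64Mi limit are assumptions, while the container name `client-container` matches the log.)

```yaml
# Hypothetical pod exposing its memory limit via a downward API volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # assumed name
spec:
  restartPolicy: Never
  containers:
    - name: client-container           # container name from the log
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
      resources:
        limits:
          memory: "64Mi"               # assumed limit value
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```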
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1953,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:32.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1486/configmap-test-c5b61a58-07a5-4772-817e-70ccfc5a876c STEP: Creating a pod to test consume configMaps Apr 10 21:35:32.963: INFO: Waiting up to 5m0s for pod "pod-configmaps-62d0b990-ff62-4d6c-baba-56581d9b8a4f" in namespace "configmap-1486" to be "success or failure" Apr 10 21:35:32.967: INFO: Pod "pod-configmaps-62d0b990-ff62-4d6c-baba-56581d9b8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.427379ms Apr 10 21:35:34.970: INFO: Pod "pod-configmaps-62d0b990-ff62-4d6c-baba-56581d9b8a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006909306s Apr 10 21:35:36.974: INFO: Pod "pod-configmaps-62d0b990-ff62-4d6c-baba-56581d9b8a4f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010371483s STEP: Saw pod success Apr 10 21:35:36.974: INFO: Pod "pod-configmaps-62d0b990-ff62-4d6c-baba-56581d9b8a4f" satisfied condition "success or failure" Apr 10 21:35:36.976: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-62d0b990-ff62-4d6c-baba-56581d9b8a4f container env-test: STEP: delete the pod Apr 10 21:35:37.001: INFO: Waiting for pod pod-configmaps-62d0b990-ff62-4d6c-baba-56581d9b8a4f to disappear Apr 10 21:35:37.012: INFO: Pod pod-configmaps-62d0b990-ff62-4d6c-baba-56581d9b8a4f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:37.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1486" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1958,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:37.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f0959305-5599-4285-a289-dd57f6e34fdd STEP: Creating a pod to test consume configMaps Apr 10 21:35:37.097: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-193f681d-2950-40fb-944b-87b07b9fd0ed" in namespace "configmap-7398" to be "success or failure" Apr 10 21:35:37.135: INFO: Pod "pod-configmaps-193f681d-2950-40fb-944b-87b07b9fd0ed": Phase="Pending", Reason="", readiness=false. Elapsed: 37.94645ms Apr 10 21:35:39.192: INFO: Pod "pod-configmaps-193f681d-2950-40fb-944b-87b07b9fd0ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095204826s Apr 10 21:35:41.324: INFO: Pod "pod-configmaps-193f681d-2950-40fb-944b-87b07b9fd0ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.227200289s STEP: Saw pod success Apr 10 21:35:41.324: INFO: Pod "pod-configmaps-193f681d-2950-40fb-944b-87b07b9fd0ed" satisfied condition "success or failure" Apr 10 21:35:41.327: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-193f681d-2950-40fb-944b-87b07b9fd0ed container configmap-volume-test: STEP: delete the pod Apr 10 21:35:41.367: INFO: Waiting for pod pod-configmaps-193f681d-2950-40fb-944b-87b07b9fd0ed to disappear Apr 10 21:35:41.371: INFO: Pod pod-configmaps-193f681d-2950-40fb-944b-87b07b9fd0ed no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:41.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7398" for this suite. 
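(Editor's note: "consumable from pods in volume with mappings" means individual ConfigMap keys are projected to chosen file paths via `items`. A sketch, with assumed key, path, and pod names; the container name `configmap-volume-test` matches the log.)

```yaml
# Hypothetical ConfigMap and pod consuming one key at a mapped path.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map      # name pattern from the log
data:
  data-1: value-1                      # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example         # assumed name
spec:
  restartPolicy: Never
  containers:
    - name: configmap-volume-test      # container name from the log
      image: busybox
      command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-map
        items:
          - key: data-1
            path: path/to/data-1       # mapping: key projected to a custom path
```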
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1966,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:41.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 10 21:35:41.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-891' Apr 10 21:35:41.816: INFO: stderr: "" Apr 10 21:35:41.816: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
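(Editor's note: the `kubectl create -f -` in the Update Demo test below feeds in a replication controller manifest. Reconstructed as a sketch from the log's identifiers — RC name `update-demo-nautilus`, label `name=update-demo`, container `update-demo`, image `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`, two pods; the port is an assumption.)

```yaml
# Hypothetical reconstruction of the update-demo replication controller.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                          # two pods appear in the log
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
        - name: update-demo
          image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
          ports:
            - containerPort: 80        # assumed port
```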
Apr 10 21:35:41.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-891' Apr 10 21:35:41.939: INFO: stderr: "" Apr 10 21:35:41.939: INFO: stdout: "update-demo-nautilus-8btw6 update-demo-nautilus-spzzt " Apr 10 21:35:41.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8btw6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-891' Apr 10 21:35:42.034: INFO: stderr: "" Apr 10 21:35:42.034: INFO: stdout: "" Apr 10 21:35:42.034: INFO: update-demo-nautilus-8btw6 is created but not running Apr 10 21:35:47.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-891' Apr 10 21:35:47.149: INFO: stderr: "" Apr 10 21:35:47.149: INFO: stdout: "update-demo-nautilus-8btw6 update-demo-nautilus-spzzt " Apr 10 21:35:47.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8btw6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-891' Apr 10 21:35:47.264: INFO: stderr: "" Apr 10 21:35:47.264: INFO: stdout: "true" Apr 10 21:35:47.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8btw6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-891' Apr 10 21:35:47.360: INFO: stderr: "" Apr 10 21:35:47.360: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 21:35:47.360: INFO: validating pod update-demo-nautilus-8btw6 Apr 10 21:35:47.364: INFO: got data: { "image": "nautilus.jpg" } Apr 10 21:35:47.364: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 21:35:47.364: INFO: update-demo-nautilus-8btw6 is verified up and running Apr 10 21:35:47.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-spzzt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-891' Apr 10 21:35:47.455: INFO: stderr: "" Apr 10 21:35:47.455: INFO: stdout: "true" Apr 10 21:35:47.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-spzzt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-891' Apr 10 21:35:47.549: INFO: stderr: "" Apr 10 21:35:47.549: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 21:35:47.549: INFO: validating pod update-demo-nautilus-spzzt Apr 10 21:35:47.589: INFO: got data: { "image": "nautilus.jpg" } Apr 10 21:35:47.589: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 10 21:35:47.589: INFO: update-demo-nautilus-spzzt is verified up and running STEP: using delete to clean up resources Apr 10 21:35:47.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-891' Apr 10 21:35:47.711: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 21:35:47.711: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 10 21:35:47.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:47.805: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:47.805: INFO: stdout: "" Apr 10 21:35:47.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:47.920: INFO: stderr: "" Apr 10 21:35:47.920: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:48.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:48.520: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:48.521: INFO: stdout: "" Apr 10 21:35:48.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:48.613: INFO: stderr: "" Apr 10 21:35:48.613: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:48.920: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:49.040: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:49.040: INFO: stdout: "" Apr 10 21:35:49.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:49.120: INFO: stderr: "" Apr 10 21:35:49.120: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:49.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:49.519: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:49.519: INFO: stdout: "" Apr 10 21:35:49.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:49.615: INFO: stderr: "" Apr 10 21:35:49.615: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:49.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:50.027: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:50.027: INFO: stdout: "" Apr 10 21:35:50.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:50.132: INFO: stderr: "" Apr 10 21:35:50.132: INFO: stdout: 
"update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:50.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:50.535: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:50.535: INFO: stdout: "" Apr 10 21:35:50.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:50.641: INFO: stderr: "" Apr 10 21:35:50.641: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:50.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:51.024: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:51.024: INFO: stdout: "" Apr 10 21:35:51.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:51.125: INFO: stderr: "" Apr 10 21:35:51.125: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:51.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:51.535: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:51.535: INFO: stdout: "" Apr 10 21:35:51.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:51.628: INFO: 
stderr: "" Apr 10 21:35:51.629: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:51.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:52.032: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:52.032: INFO: stdout: "" Apr 10 21:35:52.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:52.133: INFO: stderr: "" Apr 10 21:35:52.133: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:52.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:52.513: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:52.513: INFO: stdout: "" Apr 10 21:35:52.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 21:35:52.615: INFO: stderr: "" Apr 10 21:35:52.615: INFO: stdout: "update-demo-nautilus-8btw6\nupdate-demo-nautilus-spzzt\n" Apr 10 21:35:52.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-891' Apr 10 21:35:53.017: INFO: stderr: "No resources found in kubectl-891 namespace.\n" Apr 10 21:35:53.017: INFO: stdout: "" Apr 10 21:35:53.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-891 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ 
end }}{{ end }}' Apr 10 21:35:53.119: INFO: stderr: "" Apr 10 21:35:53.119: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:35:53.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-891" for this suite. • [SLOW TEST:11.750 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":120,"skipped":1969,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:35:53.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace 
statefulset-2681 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 10 21:35:53.330: INFO: Found 0 stateful pods, waiting for 3 Apr 10 21:36:03.335: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:36:03.335: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:36:03.335: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 10 21:36:13.335: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:36:13.335: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:36:13.335: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 10 21:36:13.362: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 10 21:36:23.420: INFO: Updating stateful set ss2 Apr 10 21:36:23.466: INFO: Waiting for Pod statefulset-2681/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 10 21:36:33.920: INFO: Found 2 stateful pods, waiting for 3 Apr 10 21:36:43.925: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:36:43.925: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:36:43.925: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling 
update Apr 10 21:36:43.949: INFO: Updating stateful set ss2 Apr 10 21:36:43.994: INFO: Waiting for Pod statefulset-2681/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 10 21:36:54.019: INFO: Updating stateful set ss2 Apr 10 21:36:54.048: INFO: Waiting for StatefulSet statefulset-2681/ss2 to complete update Apr 10 21:36:54.048: INFO: Waiting for Pod statefulset-2681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 10 21:37:04.055: INFO: Deleting all statefulset in ns statefulset-2681 Apr 10 21:37:04.058: INFO: Scaling statefulset ss2 to 0 Apr 10 21:37:14.101: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:37:14.103: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:37:14.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2681" for this suite. 
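The partition-gated canary behaviour exercised above can be expressed directly in the StatefulSet spec. A minimal sketch follows — the set name `ss2`, container name `webserver`, service `test`, and httpd image are taken from the log; every other field value is an illustrative assumption:

```yaml
# Sketch only: a StatefulSet whose RollingUpdate partition gates the rollout.
# Pods with ordinal >= partition receive the new template; lowering the
# partition phases the update in, as in the canary steps logged above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2                 # label key/value are illustrative
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
        - name: webserver
          image: docker.io/library/httpd:2.4.39-alpine
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2             # canary: only ordinal 2 updates first
```

With `partition: 2`, only `ss2-2` moves to the new revision, matching the log's wait for that pod alone; setting the partition back to 0 performs the phased roll of `ss2-1` and `ss2-0`.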
• [SLOW TEST:80.994 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":121,"skipped":1971,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:37:14.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-520d4db5-95d1-48a0-93dd-2bb9290b6e1a STEP: Creating a pod to test consume configMaps Apr 10 21:37:14.192: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90fad490-3563-4985-857c-9d70415afb3a" in namespace "projected-7659" to be "success or failure" Apr 10 21:37:14.208: INFO: Pod 
"pod-projected-configmaps-90fad490-3563-4985-857c-9d70415afb3a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.275577ms Apr 10 21:37:16.212: INFO: Pod "pod-projected-configmaps-90fad490-3563-4985-857c-9d70415afb3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020546892s Apr 10 21:37:18.216: INFO: Pod "pod-projected-configmaps-90fad490-3563-4985-857c-9d70415afb3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024002909s STEP: Saw pod success Apr 10 21:37:18.216: INFO: Pod "pod-projected-configmaps-90fad490-3563-4985-857c-9d70415afb3a" satisfied condition "success or failure" Apr 10 21:37:18.218: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-90fad490-3563-4985-857c-9d70415afb3a container projected-configmap-volume-test: STEP: delete the pod Apr 10 21:37:18.247: INFO: Waiting for pod pod-projected-configmaps-90fad490-3563-4985-857c-9d70415afb3a to disappear Apr 10 21:37:18.264: INFO: Pod pod-projected-configmaps-90fad490-3563-4985-857c-9d70415afb3a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:37:18.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7659" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2004,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:37:18.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:37:25.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8292" for this suite. • [SLOW TEST:7.085 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":123,"skipped":2006,"failed":0} [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:37:25.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0410 21:37:55.980606 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 10 21:37:55.980: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:37:55.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3444" for this suite. 
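The orphaning behaviour verified above comes from the propagation policy on the delete request. A sketch of the equivalent DeleteOptions body (field names per the `meta/v1` API; the target Deployment's name is not shown in the log, so none is given here):

```yaml
# Orphan propagation: the Deployment's ReplicaSet is deliberately left
# behind when the owner is deleted, which is what the 30-second wait checks.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

The same effect is available from kubectl via its cascade flag (`--cascade=false` on the v1.17 release this log comes from; later releases spell it `--cascade=orphan`).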
• [SLOW TEST:30.631 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":124,"skipped":2006,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:37:55.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:37:56.045: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:37:57.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-746" for this suite. 
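Creating and deleting a CustomResourceDefinition, as this test does, needs only a minimal definition. A sketch with illustrative group and kind names — the log does not show the ones the test actually used:

```yaml
# Minimal CRD sketch (apiextensions.k8s.io/v1 is served by the
# v1.17 apiserver in this run); group/kind names are assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com       # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Foo
    plural: foos
    singular: foo
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```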
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":125,"skipped":2020,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:37:57.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 10 21:37:57.176: INFO: >>> kubeConfig: /root/.kube/config Apr 10 21:38:00.098: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:38:10.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9775" for this suite. 
• [SLOW TEST:13.568 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":126,"skipped":2037,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:38:10.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Apr 10 21:38:10.751: INFO: Waiting up to 5m0s for pod "var-expansion-e2312400-8e61-4009-a5d4-22d94f0d4639" in namespace "var-expansion-1583" to be "success or failure" Apr 10 21:38:10.767: INFO: Pod "var-expansion-e2312400-8e61-4009-a5d4-22d94f0d4639": Phase="Pending", Reason="", readiness=false. Elapsed: 16.384386ms Apr 10 21:38:12.773: INFO: Pod "var-expansion-e2312400-8e61-4009-a5d4-22d94f0d4639": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022199634s Apr 10 21:38:14.776: INFO: Pod "var-expansion-e2312400-8e61-4009-a5d4-22d94f0d4639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025466036s STEP: Saw pod success Apr 10 21:38:14.777: INFO: Pod "var-expansion-e2312400-8e61-4009-a5d4-22d94f0d4639" satisfied condition "success or failure" Apr 10 21:38:14.779: INFO: Trying to get logs from node jerma-worker pod var-expansion-e2312400-8e61-4009-a5d4-22d94f0d4639 container dapi-container: STEP: delete the pod Apr 10 21:38:14.810: INFO: Waiting for pod var-expansion-e2312400-8e61-4009-a5d4-22d94f0d4639 to disappear Apr 10 21:38:14.858: INFO: Pod var-expansion-e2312400-8e61-4009-a5d4-22d94f0d4639 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:38:14.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1583" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2040,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:38:14.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 10 21:38:21.604: INFO: 10 pods remaining Apr 10 21:38:21.604: INFO: 0 pods has nil DeletionTimestamp Apr 10 21:38:21.604: INFO: Apr 10 21:38:22.067: INFO: 0 pods remaining Apr 10 21:38:22.067: INFO: 0 pods has nil DeletionTimestamp Apr 10 21:38:22.067: INFO: STEP: Gathering metrics W0410 21:38:22.977878 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 10 21:38:22.977: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:38:22.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1667" for this suite. 
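This test is the counterpart to orphaning: foreground propagation keeps the owner (here the rc) present, with its deletionTimestamp set, until every dependent pod is gone — which is why the log sees 10 pods remaining and none with a nil DeletionTimestamp before the rc disappears. A sketch of the equivalent DeleteOptions body (illustrative; the log does not show the exact request):

```yaml
# Foreground propagation: dependents are deleted first, and the owner
# is removed only once they are gone.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground
```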
• [SLOW TEST:8.118 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":128,"skipped":2054,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:38:22.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Apr 10 21:38:23.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1999 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 10 21:38:26.297: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0410 21:38:26.207142 2183 log.go:172] (0xc000362d10) (0xc0009a8320) Create stream\nI0410 21:38:26.207188 2183 log.go:172] (0xc000362d10) (0xc0009a8320) Stream added, broadcasting: 1\nI0410 21:38:26.210279 2183 log.go:172] (0xc000362d10) Reply frame received for 1\nI0410 21:38:26.210336 2183 log.go:172] (0xc000362d10) (0xc00097a000) Create stream\nI0410 21:38:26.210351 2183 log.go:172] (0xc000362d10) (0xc00097a000) Stream added, broadcasting: 3\nI0410 21:38:26.211267 2183 log.go:172] (0xc000362d10) Reply frame received for 3\nI0410 21:38:26.211323 2183 log.go:172] (0xc000362d10) (0xc000a120a0) Create stream\nI0410 21:38:26.211347 2183 log.go:172] (0xc000362d10) (0xc000a120a0) Stream added, broadcasting: 5\nI0410 21:38:26.212299 2183 log.go:172] (0xc000362d10) Reply frame received for 5\nI0410 21:38:26.212347 2183 log.go:172] (0xc000362d10) (0xc00097a0a0) Create stream\nI0410 21:38:26.212370 2183 log.go:172] (0xc000362d10) (0xc00097a0a0) Stream added, broadcasting: 7\nI0410 21:38:26.213528 2183 log.go:172] (0xc000362d10) Reply frame received for 7\nI0410 21:38:26.214060 2183 log.go:172] (0xc00097a000) (3) Writing data frame\nI0410 21:38:26.214795 2183 log.go:172] (0xc00097a000) (3) Writing data frame\nI0410 21:38:26.216269 2183 log.go:172] (0xc000362d10) Data frame received for 5\nI0410 21:38:26.216297 2183 log.go:172] (0xc000a120a0) (5) Data frame handling\nI0410 21:38:26.216322 2183 log.go:172] (0xc000a120a0) (5) Data frame sent\nI0410 21:38:26.217330 2183 log.go:172] (0xc000362d10) Data frame received for 5\nI0410 21:38:26.217351 2183 log.go:172] (0xc000a120a0) (5) Data frame handling\nI0410 21:38:26.217365 2183 log.go:172] (0xc000a120a0) (5) Data frame sent\nI0410 21:38:26.261064 2183 log.go:172] (0xc000362d10) Data frame received for 1\nI0410 21:38:26.261098 2183 log.go:172] (0xc0009a8320) (1) Data frame handling\nI0410 21:38:26.261121 2183 
log.go:172] (0xc0009a8320) (1) Data frame sent\nI0410 21:38:26.261200 2183 log.go:172] (0xc000362d10) (0xc0009a8320) Stream removed, broadcasting: 1\nI0410 21:38:26.261297 2183 log.go:172] (0xc000362d10) (0xc00097a000) Stream removed, broadcasting: 3\nI0410 21:38:26.261396 2183 log.go:172] (0xc000362d10) Data frame received for 7\nI0410 21:38:26.261443 2183 log.go:172] (0xc00097a0a0) (7) Data frame handling\nI0410 21:38:26.261485 2183 log.go:172] (0xc000362d10) Data frame received for 5\nI0410 21:38:26.261498 2183 log.go:172] (0xc000a120a0) (5) Data frame handling\nI0410 21:38:26.261507 2183 log.go:172] (0xc000362d10) Go away received\nI0410 21:38:26.261658 2183 log.go:172] (0xc000362d10) (0xc0009a8320) Stream removed, broadcasting: 1\nI0410 21:38:26.261668 2183 log.go:172] (0xc000362d10) (0xc00097a000) Stream removed, broadcasting: 3\nI0410 21:38:26.261672 2183 log.go:172] (0xc000362d10) (0xc000a120a0) Stream removed, broadcasting: 5\nI0410 21:38:26.261677 2183 log.go:172] (0xc000362d10) (0xc00097a0a0) Stream removed, broadcasting: 7\n" Apr 10 21:38:26.297: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:38:28.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1999" for this suite. 
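The deprecation warning in the output above reflects that `kubectl run --generator=job/v1` was being phased out. The Job the command created corresponds roughly to this manifest — a sketch in which only the name, image, restart policy, and command come from the logged kubectl invocation:

```yaml
# Approximate equivalent of:
#   kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
#     --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
#     -- sh -c "cat && echo 'stdin closed'"
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: e2e-test-rm-busybox-job
          image: docker.io/library/busybox:1.29
          command: ["sh", "-c", "cat && echo 'stdin closed'"]
          stdin: true            # the test attaches and pipes stdin, then
                                 # deletes the Job, as verified in the log
```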
• [SLOW TEST:5.505 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":129,"skipped":2057,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:38:28.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-828 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-828 STEP: Deleting pre-stop pod Apr 10 21:38:41.633: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:38:41.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-828" for this suite. • [SLOW TEST:13.260 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":130,"skipped":2075,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:38:41.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:38:46.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4610" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2086,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:38:46.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-814 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-814 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-814 Apr 10 21:38:46.372: INFO: Found 0 stateful pods, 
waiting for 1 Apr 10 21:38:56.376: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 10 21:38:56.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:38:56.639: INFO: stderr: "I0410 21:38:56.521757 2209 log.go:172] (0xc0000f53f0) (0xc0006dbcc0) Create stream\nI0410 21:38:56.521848 2209 log.go:172] (0xc0000f53f0) (0xc0006dbcc0) Stream added, broadcasting: 1\nI0410 21:38:56.533404 2209 log.go:172] (0xc0000f53f0) Reply frame received for 1\nI0410 21:38:56.533467 2209 log.go:172] (0xc0000f53f0) (0xc000824000) Create stream\nI0410 21:38:56.533480 2209 log.go:172] (0xc0000f53f0) (0xc000824000) Stream added, broadcasting: 3\nI0410 21:38:56.534462 2209 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0410 21:38:56.534487 2209 log.go:172] (0xc0000f53f0) (0xc000824140) Create stream\nI0410 21:38:56.534495 2209 log.go:172] (0xc0000f53f0) (0xc000824140) Stream added, broadcasting: 5\nI0410 21:38:56.535323 2209 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0410 21:38:56.606757 2209 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0410 21:38:56.606778 2209 log.go:172] (0xc000824140) (5) Data frame handling\nI0410 21:38:56.606790 2209 log.go:172] (0xc000824140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:38:56.632918 2209 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0410 21:38:56.632959 2209 log.go:172] (0xc000824000) (3) Data frame handling\nI0410 21:38:56.632978 2209 log.go:172] (0xc000824000) (3) Data frame sent\nI0410 21:38:56.633078 2209 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0410 21:38:56.633094 2209 log.go:172] (0xc000824000) (3) Data frame handling\nI0410 21:38:56.633252 2209 log.go:172] (0xc0000f53f0) Data frame 
received for 5\nI0410 21:38:56.633280 2209 log.go:172] (0xc000824140) (5) Data frame handling\nI0410 21:38:56.635163 2209 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0410 21:38:56.635202 2209 log.go:172] (0xc0006dbcc0) (1) Data frame handling\nI0410 21:38:56.635223 2209 log.go:172] (0xc0006dbcc0) (1) Data frame sent\nI0410 21:38:56.635240 2209 log.go:172] (0xc0000f53f0) (0xc0006dbcc0) Stream removed, broadcasting: 1\nI0410 21:38:56.635270 2209 log.go:172] (0xc0000f53f0) Go away received\nI0410 21:38:56.635606 2209 log.go:172] (0xc0000f53f0) (0xc0006dbcc0) Stream removed, broadcasting: 1\nI0410 21:38:56.635627 2209 log.go:172] (0xc0000f53f0) (0xc000824000) Stream removed, broadcasting: 3\nI0410 21:38:56.635635 2209 log.go:172] (0xc0000f53f0) (0xc000824140) Stream removed, broadcasting: 5\n" Apr 10 21:38:56.639: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:38:56.639: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:38:56.642: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 10 21:39:06.658: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 10 21:39:06.658: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:39:06.710: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:06.710: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC }] Apr 10 21:39:06.710: INFO: Apr 10 21:39:06.710: INFO: 
StatefulSet ss has not reached scale 3, at 1 Apr 10 21:39:07.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.955011935s Apr 10 21:39:08.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.913025855s Apr 10 21:39:09.805: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.865216411s Apr 10 21:39:10.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.860509631s Apr 10 21:39:11.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.855483171s Apr 10 21:39:12.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.851280793s Apr 10 21:39:13.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.84802385s Apr 10 21:39:14.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.834278156s Apr 10 21:39:15.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 829.442215ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-814 Apr 10 21:39:16.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:39:17.086: INFO: stderr: "I0410 21:39:17.001100 2232 log.go:172] (0xc0009da6e0) (0xc00096a140) Create stream\nI0410 21:39:17.001303 2232 log.go:172] (0xc0009da6e0) (0xc00096a140) Stream added, broadcasting: 1\nI0410 21:39:17.004065 2232 log.go:172] (0xc0009da6e0) Reply frame received for 1\nI0410 21:39:17.004134 2232 log.go:172] (0xc0009da6e0) (0xc00065b9a0) Create stream\nI0410 21:39:17.004156 2232 log.go:172] (0xc0009da6e0) (0xc00065b9a0) Stream added, broadcasting: 3\nI0410 21:39:17.005708 2232 log.go:172] (0xc0009da6e0) Reply frame received for 3\nI0410 21:39:17.005761 2232 log.go:172] (0xc0009da6e0) (0xc00096a280) Create stream\nI0410 21:39:17.005770 2232 log.go:172] (0xc0009da6e0) (0xc00096a280) Stream added, 
broadcasting: 5\nI0410 21:39:17.006788 2232 log.go:172] (0xc0009da6e0) Reply frame received for 5\nI0410 21:39:17.079889 2232 log.go:172] (0xc0009da6e0) Data frame received for 3\nI0410 21:39:17.079928 2232 log.go:172] (0xc00065b9a0) (3) Data frame handling\nI0410 21:39:17.079939 2232 log.go:172] (0xc00065b9a0) (3) Data frame sent\nI0410 21:39:17.079947 2232 log.go:172] (0xc0009da6e0) Data frame received for 3\nI0410 21:39:17.079954 2232 log.go:172] (0xc00065b9a0) (3) Data frame handling\nI0410 21:39:17.079980 2232 log.go:172] (0xc0009da6e0) Data frame received for 5\nI0410 21:39:17.079988 2232 log.go:172] (0xc00096a280) (5) Data frame handling\nI0410 21:39:17.079997 2232 log.go:172] (0xc00096a280) (5) Data frame sent\nI0410 21:39:17.080006 2232 log.go:172] (0xc0009da6e0) Data frame received for 5\nI0410 21:39:17.080021 2232 log.go:172] (0xc00096a280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0410 21:39:17.081699 2232 log.go:172] (0xc0009da6e0) Data frame received for 1\nI0410 21:39:17.081722 2232 log.go:172] (0xc00096a140) (1) Data frame handling\nI0410 21:39:17.081743 2232 log.go:172] (0xc00096a140) (1) Data frame sent\nI0410 21:39:17.081758 2232 log.go:172] (0xc0009da6e0) (0xc00096a140) Stream removed, broadcasting: 1\nI0410 21:39:17.081805 2232 log.go:172] (0xc0009da6e0) Go away received\nI0410 21:39:17.082208 2232 log.go:172] (0xc0009da6e0) (0xc00096a140) Stream removed, broadcasting: 1\nI0410 21:39:17.082238 2232 log.go:172] (0xc0009da6e0) (0xc00065b9a0) Stream removed, broadcasting: 3\nI0410 21:39:17.082256 2232 log.go:172] (0xc0009da6e0) (0xc00096a280) Stream removed, broadcasting: 5\n" Apr 10 21:39:17.086: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:39:17.086: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 10 21:39:17.087: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:39:17.312: INFO: stderr: "I0410 21:39:17.229062 2252 log.go:172] (0xc0009c00b0) (0xc0006ffd60) Create stream\nI0410 21:39:17.229308 2252 log.go:172] (0xc0009c00b0) (0xc0006ffd60) Stream added, broadcasting: 1\nI0410 21:39:17.232208 2252 log.go:172] (0xc0009c00b0) Reply frame received for 1\nI0410 21:39:17.232255 2252 log.go:172] (0xc0009c00b0) (0xc00064c820) Create stream\nI0410 21:39:17.232269 2252 log.go:172] (0xc0009c00b0) (0xc00064c820) Stream added, broadcasting: 3\nI0410 21:39:17.233302 2252 log.go:172] (0xc0009c00b0) Reply frame received for 3\nI0410 21:39:17.233331 2252 log.go:172] (0xc0009c00b0) (0xc00074d5e0) Create stream\nI0410 21:39:17.233345 2252 log.go:172] (0xc0009c00b0) (0xc00074d5e0) Stream added, broadcasting: 5\nI0410 21:39:17.234434 2252 log.go:172] (0xc0009c00b0) Reply frame received for 5\nI0410 21:39:17.305583 2252 log.go:172] (0xc0009c00b0) Data frame received for 5\nI0410 21:39:17.305645 2252 log.go:172] (0xc00074d5e0) (5) Data frame handling\nI0410 21:39:17.305661 2252 log.go:172] (0xc00074d5e0) (5) Data frame sent\nI0410 21:39:17.305672 2252 log.go:172] (0xc0009c00b0) Data frame received for 5\nI0410 21:39:17.305681 2252 log.go:172] (0xc00074d5e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0410 21:39:17.305705 2252 log.go:172] (0xc0009c00b0) Data frame received for 3\nI0410 21:39:17.305722 2252 log.go:172] (0xc00064c820) (3) Data frame handling\nI0410 21:39:17.305754 2252 log.go:172] (0xc00064c820) (3) Data frame sent\nI0410 21:39:17.305764 2252 log.go:172] (0xc0009c00b0) Data frame received for 3\nI0410 21:39:17.305776 2252 log.go:172] (0xc00064c820) (3) Data frame handling\nI0410 21:39:17.307335 2252 log.go:172] (0xc0009c00b0) Data frame received for 1\nI0410 21:39:17.307367 2252 
log.go:172] (0xc0006ffd60) (1) Data frame handling\nI0410 21:39:17.307389 2252 log.go:172] (0xc0006ffd60) (1) Data frame sent\nI0410 21:39:17.307429 2252 log.go:172] (0xc0009c00b0) (0xc0006ffd60) Stream removed, broadcasting: 1\nI0410 21:39:17.307586 2252 log.go:172] (0xc0009c00b0) Go away received\nI0410 21:39:17.307896 2252 log.go:172] (0xc0009c00b0) (0xc0006ffd60) Stream removed, broadcasting: 1\nI0410 21:39:17.307928 2252 log.go:172] (0xc0009c00b0) (0xc00064c820) Stream removed, broadcasting: 3\nI0410 21:39:17.307947 2252 log.go:172] (0xc0009c00b0) (0xc00074d5e0) Stream removed, broadcasting: 5\n" Apr 10 21:39:17.312: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:39:17.312: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 10 21:39:17.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:39:17.510: INFO: stderr: "I0410 21:39:17.442470 2272 log.go:172] (0xc0009d6630) (0xc000655ae0) Create stream\nI0410 21:39:17.442522 2272 log.go:172] (0xc0009d6630) (0xc000655ae0) Stream added, broadcasting: 1\nI0410 21:39:17.444853 2272 log.go:172] (0xc0009d6630) Reply frame received for 1\nI0410 21:39:17.444888 2272 log.go:172] (0xc0009d6630) (0xc000ade000) Create stream\nI0410 21:39:17.444911 2272 log.go:172] (0xc0009d6630) (0xc000ade000) Stream added, broadcasting: 3\nI0410 21:39:17.446219 2272 log.go:172] (0xc0009d6630) Reply frame received for 3\nI0410 21:39:17.446269 2272 log.go:172] (0xc0009d6630) (0xc000ade0a0) Create stream\nI0410 21:39:17.446294 2272 log.go:172] (0xc0009d6630) (0xc000ade0a0) Stream added, broadcasting: 5\nI0410 21:39:17.447387 2272 log.go:172] (0xc0009d6630) Reply frame received for 5\nI0410 21:39:17.504266 2272 log.go:172] (0xc0009d6630) Data frame received for 5\nI0410 
21:39:17.504340 2272 log.go:172] (0xc0009d6630) Data frame received for 3\nI0410 21:39:17.504382 2272 log.go:172] (0xc000ade000) (3) Data frame handling\nI0410 21:39:17.504398 2272 log.go:172] (0xc000ade000) (3) Data frame sent\nI0410 21:39:17.504408 2272 log.go:172] (0xc0009d6630) Data frame received for 3\nI0410 21:39:17.504415 2272 log.go:172] (0xc000ade000) (3) Data frame handling\nI0410 21:39:17.504450 2272 log.go:172] (0xc000ade0a0) (5) Data frame handling\nI0410 21:39:17.504464 2272 log.go:172] (0xc000ade0a0) (5) Data frame sent\nI0410 21:39:17.504478 2272 log.go:172] (0xc0009d6630) Data frame received for 5\nI0410 21:39:17.504492 2272 log.go:172] (0xc000ade0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0410 21:39:17.506087 2272 log.go:172] (0xc0009d6630) Data frame received for 1\nI0410 21:39:17.506108 2272 log.go:172] (0xc000655ae0) (1) Data frame handling\nI0410 21:39:17.506124 2272 log.go:172] (0xc000655ae0) (1) Data frame sent\nI0410 21:39:17.506139 2272 log.go:172] (0xc0009d6630) (0xc000655ae0) Stream removed, broadcasting: 1\nI0410 21:39:17.506154 2272 log.go:172] (0xc0009d6630) Go away received\nI0410 21:39:17.506466 2272 log.go:172] (0xc0009d6630) (0xc000655ae0) Stream removed, broadcasting: 1\nI0410 21:39:17.506483 2272 log.go:172] (0xc0009d6630) (0xc000ade000) Stream removed, broadcasting: 3\nI0410 21:39:17.506493 2272 log.go:172] (0xc0009d6630) (0xc000ade0a0) Stream removed, broadcasting: 5\n" Apr 10 21:39:17.510: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:39:17.510: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 10 21:39:17.514: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 10 21:39:27.518: INFO: Waiting for pod ss-0 to enter Running - Ready=true, 
currently Running - Ready=true Apr 10 21:39:27.518: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:39:27.518: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 10 21:39:27.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:39:27.725: INFO: stderr: "I0410 21:39:27.652339 2294 log.go:172] (0xc00011b550) (0xc00063dcc0) Create stream\nI0410 21:39:27.652419 2294 log.go:172] (0xc00011b550) (0xc00063dcc0) Stream added, broadcasting: 1\nI0410 21:39:27.655494 2294 log.go:172] (0xc00011b550) Reply frame received for 1\nI0410 21:39:27.655544 2294 log.go:172] (0xc00011b550) (0xc0003fc640) Create stream\nI0410 21:39:27.655559 2294 log.go:172] (0xc00011b550) (0xc0003fc640) Stream added, broadcasting: 3\nI0410 21:39:27.656683 2294 log.go:172] (0xc00011b550) Reply frame received for 3\nI0410 21:39:27.656742 2294 log.go:172] (0xc00011b550) (0xc00063dd60) Create stream\nI0410 21:39:27.656777 2294 log.go:172] (0xc00011b550) (0xc00063dd60) Stream added, broadcasting: 5\nI0410 21:39:27.657884 2294 log.go:172] (0xc00011b550) Reply frame received for 5\nI0410 21:39:27.719483 2294 log.go:172] (0xc00011b550) Data frame received for 3\nI0410 21:39:27.719532 2294 log.go:172] (0xc0003fc640) (3) Data frame handling\nI0410 21:39:27.719546 2294 log.go:172] (0xc0003fc640) (3) Data frame sent\nI0410 21:39:27.719558 2294 log.go:172] (0xc00011b550) Data frame received for 3\nI0410 21:39:27.719569 2294 log.go:172] (0xc0003fc640) (3) Data frame handling\nI0410 21:39:27.719582 2294 log.go:172] (0xc00011b550) Data frame received for 5\nI0410 21:39:27.719592 2294 log.go:172] (0xc00063dd60) (5) Data frame handling\nI0410 21:39:27.719603 2294 log.go:172] (0xc00063dd60) (5) Data frame sent\nI0410 21:39:27.719614 
2294 log.go:172] (0xc00011b550) Data frame received for 5\nI0410 21:39:27.719623 2294 log.go:172] (0xc00063dd60) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:39:27.721353 2294 log.go:172] (0xc00011b550) Data frame received for 1\nI0410 21:39:27.721376 2294 log.go:172] (0xc00063dcc0) (1) Data frame handling\nI0410 21:39:27.721388 2294 log.go:172] (0xc00063dcc0) (1) Data frame sent\nI0410 21:39:27.721401 2294 log.go:172] (0xc00011b550) (0xc00063dcc0) Stream removed, broadcasting: 1\nI0410 21:39:27.721477 2294 log.go:172] (0xc00011b550) Go away received\nI0410 21:39:27.721756 2294 log.go:172] (0xc00011b550) (0xc00063dcc0) Stream removed, broadcasting: 1\nI0410 21:39:27.721775 2294 log.go:172] (0xc00011b550) (0xc0003fc640) Stream removed, broadcasting: 3\nI0410 21:39:27.721787 2294 log.go:172] (0xc00011b550) (0xc00063dd60) Stream removed, broadcasting: 5\n" Apr 10 21:39:27.725: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:39:27.725: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:39:27.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:39:27.972: INFO: stderr: "I0410 21:39:27.854198 2316 log.go:172] (0xc0009ca000) (0xc0002bd4a0) Create stream\nI0410 21:39:27.854253 2316 log.go:172] (0xc0009ca000) (0xc0002bd4a0) Stream added, broadcasting: 1\nI0410 21:39:27.856538 2316 log.go:172] (0xc0009ca000) Reply frame received for 1\nI0410 21:39:27.856592 2316 log.go:172] (0xc0009ca000) (0xc00062dae0) Create stream\nI0410 21:39:27.856608 2316 log.go:172] (0xc0009ca000) (0xc00062dae0) Stream added, broadcasting: 3\nI0410 21:39:27.857649 2316 log.go:172] (0xc0009ca000) Reply frame received for 3\nI0410 21:39:27.857696 2316 log.go:172] (0xc0009ca000) 
(0xc000014000) Create stream\nI0410 21:39:27.857707 2316 log.go:172] (0xc0009ca000) (0xc000014000) Stream added, broadcasting: 5\nI0410 21:39:27.858584 2316 log.go:172] (0xc0009ca000) Reply frame received for 5\nI0410 21:39:27.920997 2316 log.go:172] (0xc0009ca000) Data frame received for 5\nI0410 21:39:27.921037 2316 log.go:172] (0xc000014000) (5) Data frame handling\nI0410 21:39:27.921068 2316 log.go:172] (0xc000014000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:39:27.964836 2316 log.go:172] (0xc0009ca000) Data frame received for 3\nI0410 21:39:27.964881 2316 log.go:172] (0xc00062dae0) (3) Data frame handling\nI0410 21:39:27.964916 2316 log.go:172] (0xc00062dae0) (3) Data frame sent\nI0410 21:39:27.964934 2316 log.go:172] (0xc0009ca000) Data frame received for 3\nI0410 21:39:27.964944 2316 log.go:172] (0xc00062dae0) (3) Data frame handling\nI0410 21:39:27.965083 2316 log.go:172] (0xc0009ca000) Data frame received for 5\nI0410 21:39:27.965103 2316 log.go:172] (0xc000014000) (5) Data frame handling\nI0410 21:39:27.966958 2316 log.go:172] (0xc0009ca000) Data frame received for 1\nI0410 21:39:27.966973 2316 log.go:172] (0xc0002bd4a0) (1) Data frame handling\nI0410 21:39:27.966998 2316 log.go:172] (0xc0002bd4a0) (1) Data frame sent\nI0410 21:39:27.967123 2316 log.go:172] (0xc0009ca000) (0xc0002bd4a0) Stream removed, broadcasting: 1\nI0410 21:39:27.967158 2316 log.go:172] (0xc0009ca000) Go away received\nI0410 21:39:27.967780 2316 log.go:172] (0xc0009ca000) (0xc0002bd4a0) Stream removed, broadcasting: 1\nI0410 21:39:27.967816 2316 log.go:172] (0xc0009ca000) (0xc00062dae0) Stream removed, broadcasting: 3\nI0410 21:39:27.967835 2316 log.go:172] (0xc0009ca000) (0xc000014000) Stream removed, broadcasting: 5\n" Apr 10 21:39:27.972: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:39:27.972: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:39:27.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:39:28.253: INFO: stderr: "I0410 21:39:28.129718 2337 log.go:172] (0xc0001131e0) (0xc0006c7ea0) Create stream\nI0410 21:39:28.129778 2337 log.go:172] (0xc0001131e0) (0xc0006c7ea0) Stream added, broadcasting: 1\nI0410 21:39:28.133024 2337 log.go:172] (0xc0001131e0) Reply frame received for 1\nI0410 21:39:28.133069 2337 log.go:172] (0xc0001131e0) (0xc0005a4780) Create stream\nI0410 21:39:28.133084 2337 log.go:172] (0xc0001131e0) (0xc0005a4780) Stream added, broadcasting: 3\nI0410 21:39:28.134232 2337 log.go:172] (0xc0001131e0) Reply frame received for 3\nI0410 21:39:28.134266 2337 log.go:172] (0xc0001131e0) (0xc0006c7f40) Create stream\nI0410 21:39:28.134276 2337 log.go:172] (0xc0001131e0) (0xc0006c7f40) Stream added, broadcasting: 5\nI0410 21:39:28.135314 2337 log.go:172] (0xc0001131e0) Reply frame received for 5\nI0410 21:39:28.201027 2337 log.go:172] (0xc0001131e0) Data frame received for 5\nI0410 21:39:28.201051 2337 log.go:172] (0xc0006c7f40) (5) Data frame handling\nI0410 21:39:28.201064 2337 log.go:172] (0xc0006c7f40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:39:28.244299 2337 log.go:172] (0xc0001131e0) Data frame received for 3\nI0410 21:39:28.244332 2337 log.go:172] (0xc0005a4780) (3) Data frame handling\nI0410 21:39:28.244359 2337 log.go:172] (0xc0005a4780) (3) Data frame sent\nI0410 21:39:28.244380 2337 log.go:172] (0xc0001131e0) Data frame received for 3\nI0410 21:39:28.244393 2337 log.go:172] (0xc0005a4780) (3) Data frame handling\nI0410 21:39:28.244591 2337 log.go:172] (0xc0001131e0) Data frame received for 5\nI0410 21:39:28.244627 2337 log.go:172] (0xc0006c7f40) (5) Data frame handling\nI0410 21:39:28.247289 2337 log.go:172] (0xc0001131e0) Data 
frame received for 1\nI0410 21:39:28.247394 2337 log.go:172] (0xc0006c7ea0) (1) Data frame handling\nI0410 21:39:28.247448 2337 log.go:172] (0xc0006c7ea0) (1) Data frame sent\nI0410 21:39:28.247478 2337 log.go:172] (0xc0001131e0) (0xc0006c7ea0) Stream removed, broadcasting: 1\nI0410 21:39:28.247529 2337 log.go:172] (0xc0001131e0) Go away received\nI0410 21:39:28.247994 2337 log.go:172] (0xc0001131e0) (0xc0006c7ea0) Stream removed, broadcasting: 1\nI0410 21:39:28.248017 2337 log.go:172] (0xc0001131e0) (0xc0005a4780) Stream removed, broadcasting: 3\nI0410 21:39:28.248036 2337 log.go:172] (0xc0001131e0) (0xc0006c7f40) Stream removed, broadcasting: 5\n" Apr 10 21:39:28.253: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:39:28.253: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:39:28.253: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:39:28.286: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 10 21:39:38.294: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 10 21:39:38.294: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 10 21:39:38.294: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 10 21:39:38.331: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:38.331: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC }] Apr 10 
21:39:38.331: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:38.331: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:38.331: INFO: Apr 10 21:39:38.331: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 10 21:39:39.335: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:39.335: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC }] Apr 10 21:39:39.335: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 
21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:39.335: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:39.335: INFO: Apr 10 21:39:39.335: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 10 21:39:40.360: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:40.360: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC }] Apr 10 21:39:40.360: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:40.360: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:40.361: INFO: Apr 10 21:39:40.361: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 10 21:39:41.416: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:41.416: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:38:46 +0000 UTC }] Apr 10 21:39:41.416: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:41.416: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 
+0000 UTC }] Apr 10 21:39:41.416: INFO: Apr 10 21:39:41.416: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 10 21:39:42.421: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:42.421: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:42.421: INFO: Apr 10 21:39:42.421: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 10 21:39:43.425: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:43.425: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:43.426: INFO: Apr 10 21:39:43.426: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 10 21:39:44.430: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:44.430: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC 
}] Apr 10 21:39:44.430: INFO: Apr 10 21:39:44.430: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 10 21:39:45.435: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:45.435: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:45.435: INFO: Apr 10 21:39:45.435: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 10 21:39:46.440: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:46.440: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 21:39:46.440: INFO: Apr 10 21:39:46.440: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 10 21:39:47.444: INFO: POD NODE PHASE GRACE CONDITIONS Apr 10 21:39:47.444: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-10 21:39:06 +0000 UTC }] Apr 10 
21:39:47.445: INFO: Apr 10 21:39:47.445: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-814 Apr 10 21:39:48.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:39:48.581: INFO: rc: 1 Apr 10 21:39:48.581: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 [the same RunHostCmd was retried roughly every 10s from 21:39:58.582 through 21:44:44.229; every attempt returned rc: 1 with stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1] Apr 10 21:44:54.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-814 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10
21:44:54.330: INFO: rc: 1 Apr 10 21:44:54.331: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: Apr 10 21:44:54.331: INFO: Scaling statefulset ss to 0 Apr 10 21:44:54.340: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 10 21:44:54.342: INFO: Deleting all statefulset in ns statefulset-814 Apr 10 21:44:54.344: INFO: Scaling statefulset ss to 0 Apr 10 21:44:54.352: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:44:54.355: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:44:54.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-814" for this suite. • [SLOW TEST:368.090 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":132,"skipped":2093,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] 
[sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:44:54.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:45:00.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6263" for this suite. STEP: Destroying namespace "nsdeletetest-4581" for this suite. Apr 10 21:45:00.726: INFO: Namespace nsdeletetest-4581 was already deleted STEP: Destroying namespace "nsdeletetest-7133" for this suite. 
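The Namespaces test above creates a service in a namespace, deletes the namespace, waits for removal, recreates it, and verifies no service survives. The cascade semantics it checks can be sketched with a toy in-memory model (all names below are illustrative; this is not the real API machinery):

```python
# Toy model of namespace-scoped lifecycle: deleting a namespace removes
# every object scoped to it, and a recreated namespace starts empty.

class Cluster:
    def __init__(self):
        self.namespaces = {}  # name -> {"services": set of service names}

    def create_namespace(self, name):
        self.namespaces[name] = {"services": set()}

    def create_service(self, ns, svc):
        self.namespaces[ns]["services"].add(svc)

    def delete_namespace(self, name):
        # Namespace deletion cascades: all contained objects go with it.
        del self.namespaces[name]

    def services(self, ns):
        return sorted(self.namespaces[ns]["services"])

cluster = Cluster()
cluster.create_namespace("nsdeletetest")
cluster.create_service("nsdeletetest", "test-service")
cluster.delete_namespace("nsdeletetest")
cluster.create_namespace("nsdeletetest")   # recreate, as the test does
print(cluster.services("nsdeletetest"))    # -> []
```

The real test additionally waits for the namespace to leave the Terminating phase before recreating it, which the toy model skips.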
• [SLOW TEST:6.354 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":133,"skipped":2101,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:45:00.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 10 21:45:08.864: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 10 21:45:08.870: INFO: Pod pod-with-prestop-http-hook still exists Apr 10 21:45:10.870: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 10 21:45:10.877: INFO: Pod pod-with-prestop-http-hook still exists Apr 10 21:45:12.870: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 10 21:45:12.875: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:45:12.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-327" for this suite. 
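The repeated "Waiting for pod pod-with-prestop-http-hook to disappear ... still exists" lines above are the framework's generic poll loop: check a condition at a fixed interval until it holds or a timeout elapses (here every 2s, up to the usual 5m pod deadline). A rough sketch of that pattern, with helper names of our own choosing rather than the framework's:

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds have elapsed. Returns True on success."""
    start = clock()
    while clock() - start < timeout:
        if condition():
            return True
        sleep(interval)
    return False

# Example: a condition that becomes true on the third poll, like a pod
# that still exists on the first two checks and is gone on the third.
checks = iter([False, False, True])
assert wait_for(lambda: next(checks), timeout=10,
                interval=0, sleep=lambda s: None)
```

Injecting `clock` and `sleep` keeps the helper testable without real waiting; the e2e framework's `wait.Poll` helpers follow the same shape in Go.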
• [SLOW TEST:12.177 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:45:12.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0410 21:45:14.030273 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 10 21:45:14.030: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:45:14.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-808" for this suite. 
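The garbage collector test above deletes a Deployment without orphaning and waits for its ReplicaSet, and that ReplicaSet's Pods, to be collected. The ownership chain it exercises can be modeled as a toy recursive walk over ownerReferences (illustrative only; the real collector lives in kube-controller-manager and works from watch events):

```python
# Toy model of cascading deletion via ownerReferences:
# a Deployment owns a ReplicaSet, which owns Pods.
owners = {
    "rs-1": "deploy-1",   # dependent -> owner
    "pod-a": "rs-1",
    "pod-b": "rs-1",
}
objects = {"deploy-1", "rs-1", "pod-a", "pod-b"}

def delete(name, orphan=False):
    """Delete `name`; unless orphaning, recursively delete its dependents."""
    objects.discard(name)
    if orphan:
        # Orphaning: dependents keep existing, they just lose the owner ref.
        for dep, own in list(owners.items()):
            if own == name:
                del owners[dep]
        return
    for dep, own in list(owners.items()):
        if own == name:
            delete(dep)

delete("deploy-1")        # not orphaning, as in the test above
print(sorted(objects))    # -> []
```

The "expected 0 pods, got 2 pods" STEP in the log is the test polling until this cascade finishes, since the real collector deletes dependents asynchronously.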
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":135,"skipped":2150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:45:14.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:45:14.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be539488-a7ef-4c49-9da3-dfd9997cee42" in namespace "projected-1794" to be "success or failure" Apr 10 21:45:14.140: INFO: Pod "downwardapi-volume-be539488-a7ef-4c49-9da3-dfd9997cee42": Phase="Pending", Reason="", readiness=false. Elapsed: 3.039815ms Apr 10 21:45:16.144: INFO: Pod "downwardapi-volume-be539488-a7ef-4c49-9da3-dfd9997cee42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007751154s Apr 10 21:45:18.243: INFO: Pod "downwardapi-volume-be539488-a7ef-4c49-9da3-dfd9997cee42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.106299382s STEP: Saw pod success Apr 10 21:45:18.243: INFO: Pod "downwardapi-volume-be539488-a7ef-4c49-9da3-dfd9997cee42" satisfied condition "success or failure" Apr 10 21:45:18.290: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-be539488-a7ef-4c49-9da3-dfd9997cee42 container client-container: STEP: delete the pod Apr 10 21:45:18.404: INFO: Waiting for pod downwardapi-volume-be539488-a7ef-4c49-9da3-dfd9997cee42 to disappear Apr 10 21:45:18.409: INFO: Pod downwardapi-volume-be539488-a7ef-4c49-9da3-dfd9997cee42 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:45:18.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1794" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2194,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:45:18.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 
21:45:18.517: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 10 21:45:18.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:18.668: INFO: Number of nodes with available pods: 0 Apr 10 21:45:18.668: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:45:19.672: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:19.676: INFO: Number of nodes with available pods: 0 Apr 10 21:45:19.676: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:45:20.672: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:20.675: INFO: Number of nodes with available pods: 0 Apr 10 21:45:20.675: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:45:21.673: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:21.677: INFO: Number of nodes with available pods: 0 Apr 10 21:45:21.677: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:45:22.673: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:22.677: INFO: Number of nodes with available pods: 2 Apr 10 21:45:22.677: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 10 21:45:22.713: INFO: Wrong image for pod: daemon-set-d74kt. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:22.713: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:22.727: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:23.731: INFO: Wrong image for pod: daemon-set-d74kt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:23.731: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:23.734: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:24.733: INFO: Wrong image for pod: daemon-set-d74kt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:24.733: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:24.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:25.732: INFO: Wrong image for pod: daemon-set-d74kt. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:25.732: INFO: Pod daemon-set-d74kt is not available Apr 10 21:45:25.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 10 21:45:25.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:26.732: INFO: Pod daemon-set-6whzh is not available Apr 10 21:45:26.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:26.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:27.732: INFO: Pod daemon-set-6whzh is not available Apr 10 21:45:27.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:27.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:28.732: INFO: Pod daemon-set-6whzh is not available Apr 10 21:45:28.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:28.735: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:29.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:29.735: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:30.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 10 21:45:30.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:31.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:31.733: INFO: Pod daemon-set-knnsk is not available Apr 10 21:45:31.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:32.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:32.732: INFO: Pod daemon-set-knnsk is not available Apr 10 21:45:32.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:33.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:33.733: INFO: Pod daemon-set-knnsk is not available Apr 10 21:45:33.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:34.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:34.732: INFO: Pod daemon-set-knnsk is not available Apr 10 21:45:34.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:35.732: INFO: Wrong image for pod: daemon-set-knnsk. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:35.732: INFO: Pod daemon-set-knnsk is not available Apr 10 21:45:35.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:36.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:36.732: INFO: Pod daemon-set-knnsk is not available Apr 10 21:45:36.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:37.732: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:37.732: INFO: Pod daemon-set-knnsk is not available Apr 10 21:45:37.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:38.731: INFO: Wrong image for pod: daemon-set-knnsk. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 10 21:45:38.731: INFO: Pod daemon-set-knnsk is not available Apr 10 21:45:38.735: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:39.732: INFO: Pod daemon-set-9g4sx is not available Apr 10 21:45:39.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 10 21:45:39.741: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:39.744: INFO: Number of nodes with available pods: 1 Apr 10 21:45:39.744: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:45:40.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:40.752: INFO: Number of nodes with available pods: 1 Apr 10 21:45:40.752: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:45:41.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:41.754: INFO: Number of nodes with available pods: 1 Apr 10 21:45:41.754: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:45:42.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:45:42.754: INFO: Number of nodes with available pods: 2 Apr 10 21:45:42.754: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5912, will wait for the garbage collector to delete the pods Apr 10 21:45:42.825: INFO: Deleting DaemonSet.extensions daemon-set took: 6.237012ms Apr 10 21:45:43.125: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.217764ms Apr 10 21:45:49.528: INFO: Number of nodes with available pods: 0 Apr 10 21:45:49.528: INFO: Number of running nodes: 0, number of available pods: 
0 Apr 10 21:45:49.530: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5912/daemonsets","resourceVersion":"7045504"},"items":null} Apr 10 21:45:49.533: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5912/pods","resourceVersion":"7045504"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:45:49.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5912" for this suite. • [SLOW TEST:31.134 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":137,"skipped":2194,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:45:49.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:45:49.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3411dcb2-7d36-405f-bba3-d1d79ede0ed3" in namespace "projected-2003" to be "success or failure" Apr 10 21:45:49.668: INFO: Pod "downwardapi-volume-3411dcb2-7d36-405f-bba3-d1d79ede0ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.148089ms Apr 10 21:45:51.692: INFO: Pod "downwardapi-volume-3411dcb2-7d36-405f-bba3-d1d79ede0ed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027394977s Apr 10 21:45:53.696: INFO: Pod "downwardapi-volume-3411dcb2-7d36-405f-bba3-d1d79ede0ed3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031537058s STEP: Saw pod success Apr 10 21:45:53.696: INFO: Pod "downwardapi-volume-3411dcb2-7d36-405f-bba3-d1d79ede0ed3" satisfied condition "success or failure" Apr 10 21:45:53.700: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3411dcb2-7d36-405f-bba3-d1d79ede0ed3 container client-container: STEP: delete the pod Apr 10 21:45:53.717: INFO: Waiting for pod downwardapi-volume-3411dcb2-7d36-405f-bba3-d1d79ede0ed3 to disappear Apr 10 21:45:53.734: INFO: Pod downwardapi-volume-3411dcb2-7d36-405f-bba3-d1d79ede0ed3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:45:53.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2003" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2204,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:45:53.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 10 21:45:53.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9805' Apr 10 21:45:53.983: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 10 21:45:53.983: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Apr 10 21:45:54.015: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-f66ds] Apr 10 21:45:54.015: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-f66ds" in namespace "kubectl-9805" to be "running and ready" Apr 10 21:45:54.018: INFO: Pod "e2e-test-httpd-rc-f66ds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.436836ms Apr 10 21:45:56.021: INFO: Pod "e2e-test-httpd-rc-f66ds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005846855s Apr 10 21:45:58.026: INFO: Pod "e2e-test-httpd-rc-f66ds": Phase="Running", Reason="", readiness=true. Elapsed: 4.010104937s Apr 10 21:45:58.026: INFO: Pod "e2e-test-httpd-rc-f66ds" satisfied condition "running and ready" Apr 10 21:45:58.026: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-f66ds] Apr 10 21:45:58.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9805' Apr 10 21:45:58.151: INFO: stderr: "" Apr 10 21:45:58.151: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.116. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.116. 
Set the 'ServerName' directive globally to suppress this message\n[Fri Apr 10 21:45:56.233583 2020] [mpm_event:notice] [pid 1:tid 140109113396072] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Apr 10 21:45:56.233632 2020] [core:notice] [pid 1:tid 140109113396072] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Apr 10 21:45:58.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9805' Apr 10 21:45:58.259: INFO: stderr: "" Apr 10 21:45:58.259: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:45:58.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9805" for this suite. 
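The `--generator=run/v1` invocation used in the test above was already deprecated at the time of this run; the same object can be created with `kubectl create -f` from a manifest. A hedged sketch of the ReplicationController that command produces, with the name and image taken from the log (the `run: <name>` label/selector follows the generator's convention and is an assumption, not a value printed in this log):

```yaml
# Sketch of the RC created by the deprecated
# `kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1`
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
  namespace: kubectl-9805
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc        # assumed generator convention
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine
```

The non-deprecated equivalents suggested by the warning in the log are `kubectl run --generator=run-pod/v1` (a bare pod) or `kubectl create` with a manifest like the above.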
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":139,"skipped":2208,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:45:58.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 10 21:45:58.305: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Apr 10 21:45:59.263: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 10 21:46:01.345: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151959, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151959, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151959, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722151959, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:46:03.987: INFO: Waited 628.329699ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:46:04.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4576" for this suite. 
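Registering the sample API server, as this test does, amounts to creating an APIService object that points the kube-aggregator at the in-cluster service fronting the sample-apiserver deployment. A hedged sketch of that object — the group/version and service name follow upstream sample-apiserver conventions and are assumptions, not values from this log; only the namespace comes from the run:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # assumed group/version, per sample-apiserver convention
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                  # assumed service name
    namespace: aggregator-4576
  caBundle: <base64-encoded CA>       # placeholder; must match the serving cert
```

Once the APIService is Available, requests to `/apis/wardle.example.com/v1alpha1/...` on the kube-apiserver are proxied to the sample server, which is what the "Waited ... for the sample-apiserver to be ready to handle requests" line in the log verifies.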
• [SLOW TEST:6.238 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":140,"skipped":2215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:46:04.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 10 21:46:04.743: INFO: Waiting up to 5m0s for pod "downward-api-664c7767-d53f-466d-89d5-73e4a8243000" in namespace "downward-api-4059" to be "success or failure" Apr 10 21:46:04.818: INFO: Pod "downward-api-664c7767-d53f-466d-89d5-73e4a8243000": Phase="Pending", Reason="", readiness=false. Elapsed: 74.462022ms Apr 10 21:46:06.822: INFO: Pod "downward-api-664c7767-d53f-466d-89d5-73e4a8243000": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0785158s Apr 10 21:46:08.826: INFO: Pod "downward-api-664c7767-d53f-466d-89d5-73e4a8243000": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082495557s STEP: Saw pod success Apr 10 21:46:08.826: INFO: Pod "downward-api-664c7767-d53f-466d-89d5-73e4a8243000" satisfied condition "success or failure" Apr 10 21:46:08.829: INFO: Trying to get logs from node jerma-worker2 pod downward-api-664c7767-d53f-466d-89d5-73e4a8243000 container dapi-container: STEP: delete the pod Apr 10 21:46:08.850: INFO: Waiting for pod downward-api-664c7767-d53f-466d-89d5-73e4a8243000 to disappear Apr 10 21:46:08.902: INFO: Pod downward-api-664c7767-d53f-466d-89d5-73e4a8243000 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:46:08.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4059" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:46:08.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:46:08.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8075" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":142,"skipped":2277,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:46:09.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-7f2d4097-659e-4233-88b5-3af7424eea83 STEP: Creating configMap with name cm-test-opt-upd-38529194-6963-4acf-874a-e3ce3c8b1508 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7f2d4097-659e-4233-88b5-3af7424eea83 STEP: Updating configmap cm-test-opt-upd-38529194-6963-4acf-874a-e3ce3c8b1508 STEP: Creating configMap with name 
cm-test-opt-create-9c092c16-c1c3-4681-93d0-2ca520b39021 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:47:41.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5544" for this suite. • [SLOW TEST:92.582 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2285,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:47:41.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:47:41.958: INFO: deployment "sample-webhook-deployment" doesn't have the required 
revision set Apr 10 21:47:43.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152061, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152061, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152062, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152061, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:47:45.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152061, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152061, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152062, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152061, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service 
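The registration step performed next in this test corresponds to creating a MutatingWebhookConfiguration scoped to ConfigMaps. A hedged sketch — the service name and namespace come from this log, while the webhook name, path, and rule details are assumptions about the e2e fixture:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmap.example.com   # assumed name
webhooks:
- name: mutate-configmap.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook           # from the log
      namespace: webhook-4065          # from the log
      path: /mutating-configmaps       # assumed path
    caBundle: <base64-encoded CA>      # placeholder; the test wires this from its generated cert
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
```

With this in place, the apiserver calls the webhook on ConfigMap creation, which is how the "create a configmap that should be updated by the webhook" step observes the mutation.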
STEP: Verifying the service has paired with the endpoint Apr 10 21:47:49.170: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:47:49.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4065" for this suite. STEP: Destroying namespace "webhook-4065-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.779 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":144,"skipped":2294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:47:49.410: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:47:54.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5280" for this suite. • [SLOW TEST:5.556 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":145,"skipped":2317,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:47:54.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:47:55.036: INFO: Creating deployment "test-recreate-deployment" Apr 10 21:47:55.041: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 10 21:47:55.078: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 10 21:47:57.084: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 10 21:47:57.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152075, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152075, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152075, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152075, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:47:59.095: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 10 21:47:59.102: INFO: Updating deployment test-recreate-deployment Apr 10 21:47:59.102: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 10 21:47:59.458: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8929 /apis/apps/v1/namespaces/deployment-8929/deployments/test-recreate-deployment eebed966-2c4d-480e-8bd2-855e5dbacbe2 7046339 2 2020-04-10 21:47:55 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045944d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-10 21:47:59 +0000 UTC,LastTransitionTime:2020-04-10 21:47:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is 
progressing.,LastUpdateTime:2020-04-10 21:47:59 +0000 UTC,LastTransitionTime:2020-04-10 21:47:55 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 10 21:47:59.605: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-8929 /apis/apps/v1/namespaces/deployment-8929/replicasets/test-recreate-deployment-5f94c574ff 34a6aa57-b610-4d49-a537-59f84a3998f3 7046337 1 2020-04-10 21:47:59 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment eebed966-2c4d-480e-8bd2-855e5dbacbe2 0xc004594857 0xc004594858}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045948b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 10 21:47:59.605: INFO: All old ReplicaSets of Deployment 
"test-recreate-deployment": Apr 10 21:47:59.606: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-8929 /apis/apps/v1/namespaces/deployment-8929/replicasets/test-recreate-deployment-799c574856 0beab211-725d-47a7-ae7f-8eb2f1d5e08b 7046328 2 2020-04-10 21:47:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment eebed966-2c4d-480e-8bd2-855e5dbacbe2 0xc004594927 0xc004594928}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004594998 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 10 21:47:59.609: INFO: Pod "test-recreate-deployment-5f94c574ff-872zd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-872zd test-recreate-deployment-5f94c574ff- deployment-8929 /api/v1/namespaces/deployment-8929/pods/test-recreate-deployment-5f94c574ff-872zd 
b22e50e7-af63-4fb5-a558-b98201860551 7046340 0 2020-04-10 21:47:59 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 34a6aa57-b610-4d49-a537-59f84a3998f3 0xc004594de7 0xc004594de8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjbqx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjbqx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjbqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,
HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:47:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:47:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:47:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:47:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-10 21:47:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:47:59.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8929" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":146,"skipped":2321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:47:59.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:47:59.736: INFO: Create a RollingUpdate DaemonSet Apr 10 21:47:59.739: INFO: Check that daemon 
pods launch on every node of the cluster Apr 10 21:47:59.757: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:47:59.767: INFO: Number of nodes with available pods: 0 Apr 10 21:47:59.767: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:48:00.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:48:00.775: INFO: Number of nodes with available pods: 0 Apr 10 21:48:00.775: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:48:01.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:48:01.775: INFO: Number of nodes with available pods: 0 Apr 10 21:48:01.775: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:48:02.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:48:02.775: INFO: Number of nodes with available pods: 0 Apr 10 21:48:02.775: INFO: Node jerma-worker is running more than one daemon pod Apr 10 21:48:03.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:48:03.775: INFO: Number of nodes with available pods: 2 Apr 10 21:48:03.775: INFO: Number of running nodes: 2, number of available pods: 2 Apr 10 21:48:03.775: INFO: Update the DaemonSet to trigger a rollout Apr 10 21:48:03.782: INFO: Updating DaemonSet daemon-set Apr 10 21:48:19.802: INFO: Roll back the DaemonSet before rollout is complete Apr 10 21:48:19.808: INFO: Updating DaemonSet 
daemon-set Apr 10 21:48:19.808: INFO: Make sure DaemonSet rollback is complete Apr 10 21:48:19.815: INFO: Wrong image for pod: daemon-set-qb85s. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 10 21:48:19.815: INFO: Pod daemon-set-qb85s is not available Apr 10 21:48:19.836: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:48:20.856: INFO: Wrong image for pod: daemon-set-qb85s. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 10 21:48:20.856: INFO: Pod daemon-set-qb85s is not available Apr 10 21:48:20.860: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:48:21.841: INFO: Wrong image for pod: daemon-set-qb85s. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 10 21:48:21.841: INFO: Pod daemon-set-qb85s is not available Apr 10 21:48:21.846: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 10 21:48:22.839: INFO: Pod daemon-set-mslc9 is not available Apr 10 21:48:22.842: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6826, will wait for the garbage collector to delete the pods Apr 10 21:48:22.907: INFO: Deleting DaemonSet.extensions daemon-set took: 7.411487ms Apr 10 21:48:23.307: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.289906ms Apr 10 21:48:29.514: INFO: Number of nodes with available pods: 0 Apr 10 21:48:29.514: INFO: Number of running nodes: 0, number of available pods: 0 Apr 10 21:48:29.516: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6826/daemonsets","resourceVersion":"7046535"},"items":null} Apr 10 21:48:29.518: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6826/pods","resourceVersion":"7046535"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:48:29.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6826" for this suite. 
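The rollback check recorded above replaces only the pods still running the bad image (`foo:non-existent`) and leaves pods already on the rollback target (`docker.io/library/httpd:2.4.38-alpine`) untouched, which is what "without unnecessary restarts" means. A minimal illustrative sketch of that selection logic in Python (not the suite's Go implementation; `daemon-set-x7k2p` is a hypothetical healthy pod, while `daemon-set-qb85s` and both images are taken from the log):

```python
def pods_to_restart(pods, target_image):
    """Return names of pods a rollback must recreate.

    `pods` maps pod name -> image the pod is currently running. Pods already
    on the rollback target image are left alone (no unnecessary restarts).
    """
    return sorted(name for name, image in pods.items() if image != target_image)

pods = {
    "daemon-set-qb85s": "foo:non-existent",                       # picked up the bad rollout
    "daemon-set-x7k2p": "docker.io/library/httpd:2.4.38-alpine",  # hypothetical: never updated
}
print(pods_to_restart(pods, "docker.io/library/httpd:2.4.38-alpine"))
# -> ['daemon-set-qb85s']
```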
• [SLOW TEST:29.897 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":147,"skipped":2357,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:48:29.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:48:29.591: INFO: Waiting up to 5m0s for pod "downwardapi-volume-baceb491-ae2e-4ffe-90d0-47b8e94282e8" in namespace "projected-4857" to be "success or failure" Apr 10 21:48:29.604: INFO: Pod "downwardapi-volume-baceb491-ae2e-4ffe-90d0-47b8e94282e8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.555928ms Apr 10 21:48:31.628: INFO: Pod "downwardapi-volume-baceb491-ae2e-4ffe-90d0-47b8e94282e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037569516s Apr 10 21:48:33.633: INFO: Pod "downwardapi-volume-baceb491-ae2e-4ffe-90d0-47b8e94282e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042174932s STEP: Saw pod success Apr 10 21:48:33.633: INFO: Pod "downwardapi-volume-baceb491-ae2e-4ffe-90d0-47b8e94282e8" satisfied condition "success or failure" Apr 10 21:48:33.636: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-baceb491-ae2e-4ffe-90d0-47b8e94282e8 container client-container: STEP: delete the pod Apr 10 21:48:33.657: INFO: Waiting for pod downwardapi-volume-baceb491-ae2e-4ffe-90d0-47b8e94282e8 to disappear Apr 10 21:48:33.660: INFO: Pod downwardapi-volume-baceb491-ae2e-4ffe-90d0-47b8e94282e8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:48:33.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4857" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2363,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:48:33.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 10 21:48:33.747: INFO: Waiting up to 5m0s for pod "pod-6aad173b-9647-4ab0-8d66-828fd1810d34" in namespace "emptydir-9836" to be "success or failure" Apr 10 21:48:33.766: INFO: Pod "pod-6aad173b-9647-4ab0-8d66-828fd1810d34": Phase="Pending", Reason="", readiness=false. Elapsed: 19.773929ms Apr 10 21:48:35.770: INFO: Pod "pod-6aad173b-9647-4ab0-8d66-828fd1810d34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023831465s Apr 10 21:48:37.774: INFO: Pod "pod-6aad173b-9647-4ab0-8d66-828fd1810d34": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027571442s STEP: Saw pod success Apr 10 21:48:37.774: INFO: Pod "pod-6aad173b-9647-4ab0-8d66-828fd1810d34" satisfied condition "success or failure" Apr 10 21:48:37.777: INFO: Trying to get logs from node jerma-worker2 pod pod-6aad173b-9647-4ab0-8d66-828fd1810d34 container test-container: STEP: delete the pod Apr 10 21:48:37.811: INFO: Waiting for pod pod-6aad173b-9647-4ab0-8d66-828fd1810d34 to disappear Apr 10 21:48:37.821: INFO: Pod pod-6aad173b-9647-4ab0-8d66-828fd1810d34 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:48:37.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9836" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2365,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:48:37.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-4de87247-55da-4743-aba4-7bcda91aa419 STEP: Creating a pod to test consume configMaps Apr 10 21:48:37.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-59785388-0cd9-4f6c-b71f-bfda7129049c" in 
namespace "configmap-2883" to be "success or failure" Apr 10 21:48:37.941: INFO: Pod "pod-configmaps-59785388-0cd9-4f6c-b71f-bfda7129049c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.860495ms Apr 10 21:48:39.945: INFO: Pod "pod-configmaps-59785388-0cd9-4f6c-b71f-bfda7129049c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007384429s Apr 10 21:48:41.949: INFO: Pod "pod-configmaps-59785388-0cd9-4f6c-b71f-bfda7129049c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011226476s STEP: Saw pod success Apr 10 21:48:41.949: INFO: Pod "pod-configmaps-59785388-0cd9-4f6c-b71f-bfda7129049c" satisfied condition "success or failure" Apr 10 21:48:41.951: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-59785388-0cd9-4f6c-b71f-bfda7129049c container configmap-volume-test: STEP: delete the pod Apr 10 21:48:42.037: INFO: Waiting for pod pod-configmaps-59785388-0cd9-4f6c-b71f-bfda7129049c to disappear Apr 10 21:48:42.043: INFO: Pod pod-configmaps-59785388-0cd9-4f6c-b71f-bfda7129049c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:48:42.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2883" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2368,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:48:42.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 10 21:48:48.712: INFO: Successfully updated pod "adopt-release-dhssz" STEP: Checking that the Job readopts the Pod Apr 10 21:48:48.712: INFO: Waiting up to 15m0s for pod "adopt-release-dhssz" in namespace "job-7626" to be "adopted" Apr 10 21:48:48.715: INFO: Pod "adopt-release-dhssz": Phase="Running", Reason="", readiness=true. Elapsed: 2.802009ms Apr 10 21:48:50.718: INFO: Pod "adopt-release-dhssz": Phase="Running", Reason="", readiness=true. Elapsed: 2.006126013s Apr 10 21:48:50.718: INFO: Pod "adopt-release-dhssz" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 10 21:48:51.227: INFO: Successfully updated pod "adopt-release-dhssz" STEP: Checking that the Job releases the Pod Apr 10 21:48:51.227: INFO: Waiting up to 15m0s for pod "adopt-release-dhssz" in namespace "job-7626" to be "released" Apr 10 21:48:51.236: INFO: Pod "adopt-release-dhssz": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.814017ms Apr 10 21:48:53.241: INFO: Pod "adopt-release-dhssz": Phase="Running", Reason="", readiness=true. Elapsed: 2.013751782s Apr 10 21:48:53.241: INFO: Pod "adopt-release-dhssz" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:48:53.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7626" for this suite. • [SLOW TEST:11.199 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":151,"skipped":2369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:48:53.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0410 21:49:03.351703 7 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 10 21:49:03.351: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:03.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6021" for this suite. 
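The behaviour exercised above (delete the RC without orphaning, then wait for its pods to be garbage collected) rests on owner references: a dependent whose owner is deleted non-orphaning is collected too. A minimal Python sketch of that pruning, under assumed names (the log does not print the RC or pod names, so `simpletest-rc` and the pod names here are hypothetical):

```python
def collect(objects, deleted_owner):
    """Return the objects that survive after `deleted_owner` is deleted
    without orphaning.

    `objects` maps object name -> list of owner names (empty = no owner).
    Dependents referencing the deleted owner are garbage collected.
    """
    return {name: owners for name, owners in objects.items()
            if deleted_owner not in owners}

cluster = {
    "simpletest-rc": [],                        # the replication controller
    "simpletest-rc-pod-1": ["simpletest-rc"],   # owned by the RC
    "simpletest-rc-pod-2": ["simpletest-rc"],   # owned by the RC
    "unrelated-pod": [],                        # no owner; must survive
}
del cluster["simpletest-rc"]                    # STEP: delete the rc (not orphaning)
survivors = collect(cluster, "simpletest-rc")   # STEP: wait for pods to be garbage collected
```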
• [SLOW TEST:10.110 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":152,"skipped":2392,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:03.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:49:03.439: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 10 21:49:08.445: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 10 21:49:08.445: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 10 21:49:10.449: INFO: Creating deployment "test-rollover-deployment" Apr 10 21:49:10.470: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 10 21:49:12.477: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 10 21:49:12.483: INFO: Ensure that both 
replica sets have 1 created replica Apr 10 21:49:12.490: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 10 21:49:12.495: INFO: Updating deployment test-rollover-deployment Apr 10 21:49:12.495: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 10 21:49:14.516: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 10 21:49:14.523: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 10 21:49:14.529: INFO: all replica sets need to contain the pod-template-hash label Apr 10 21:49:14.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152152, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:49:16.536: INFO: all replica sets need to contain the pod-template-hash label Apr 10 21:49:16.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152155, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:49:18.536: INFO: all replica sets need to contain the pod-template-hash label Apr 10 21:49:18.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152155, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:49:20.537: INFO: all replica sets need to contain the pod-template-hash label Apr 10 21:49:20.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152155, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:49:22.538: INFO: all replica sets need to contain the pod-template-hash label Apr 10 21:49:22.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152155, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:49:24.537: INFO: all replica sets need to contain the pod-template-hash label Apr 10 21:49:24.537: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152155, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152150, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 21:49:26.537: INFO: Apr 10 21:49:26.537: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 10 21:49:26.545: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6383 /apis/apps/v1/namespaces/deployment-6383/deployments/test-rollover-deployment bdbedbf7-ce2f-4537-aebb-81b9a3457b9a 7046987 2 2020-04-10 21:49:10 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003355da8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-10 21:49:10 +0000 UTC,LastTransitionTime:2020-04-10 21:49:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-10 21:49:25 +0000 UTC,LastTransitionTime:2020-04-10 21:49:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 10 21:49:26.548: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6383 /apis/apps/v1/namespaces/deployment-6383/replicasets/test-rollover-deployment-574d6dfbff 7169b051-6acd-4474-aa45-8ffcc9bbf37f 7046976 2 2020-04-10 21:49:12 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment bdbedbf7-ce2f-4537-aebb-81b9a3457b9a 
0xc00438f9d7 0xc00438f9d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00438fa48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 10 21:49:26.548: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 10 21:49:26.548: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6383 /apis/apps/v1/namespaces/deployment-6383/replicasets/test-rollover-controller dbf61b98-2191-4f63-8a74-69d102068fce 7046985 2 2020-04-10 21:49:03 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment bdbedbf7-ce2f-4537-aebb-81b9a3457b9a 0xc00438f907 0xc00438f908}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] 
[{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00438f968 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 10 21:49:26.548: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6383 /apis/apps/v1/namespaces/deployment-6383/replicasets/test-rollover-deployment-f6c94f66c 7c4bec18-44c8-4696-a54f-8983ec7f60b5 7046931 2 2020-04-10 21:49:10 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment bdbedbf7-ce2f-4537-aebb-81b9a3457b9a 0xc00438fab0 0xc00438fab1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00438fb28 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 10 21:49:26.552: INFO: Pod "test-rollover-deployment-574d6dfbff-rrlzh" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-rrlzh test-rollover-deployment-574d6dfbff- deployment-6383 /api/v1/namespaces/deployment-6383/pods/test-rollover-deployment-574d6dfbff-rrlzh cbeb156e-b67d-419b-a754-c094476cf8a0 7046944 0 2020-04-10 21:49:12 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 7169b051-6acd-4474-aa45-8ffcc9bbf37f 0xc004c6a157 0xc004c6a158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nnb6g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nnb6g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nnb6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,
TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:49:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:49:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:49:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:49:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.237,StartTime:2020-04-10 21:49:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:49:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://64002c80d0a446ecc49e4868e78bb839eeb6206a6209d1d5e42139b92ff97619,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:26.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6383" for this suite. 
• [SLOW TEST:23.199 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":153,"skipped":2393,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:26.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:49:26.658: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 10 21:49:31.662: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 10 21:49:31.662: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 10 21:49:35.775: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9394 
/apis/apps/v1/namespaces/deployment-9394/deployments/test-cleanup-deployment 0bbca464-5781-4125-8fcf-df4db509555c 7047101 1 2020-04-10 21:49:31 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038f5a28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-10 21:49:31 +0000 UTC,LastTransitionTime:2020-04-10 21:49:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-04-10 21:49:35 +0000 UTC,LastTransitionTime:2020-04-10 21:49:31 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 10 21:49:35.779: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-9394 /apis/apps/v1/namespaces/deployment-9394/replicasets/test-cleanup-deployment-55ffc6b7b6 11357637-fafd-4a13-bb0c-dcd62b986c60 7047090 1 2020-04-10 21:49:31 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 0bbca464-5781-4125-8fcf-df4db509555c 0xc004c6bf57 0xc004c6bf58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c6bfc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 10 21:49:35.782: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-q4z76" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-q4z76 test-cleanup-deployment-55ffc6b7b6- 
deployment-9394 /api/v1/namespaces/deployment-9394/pods/test-cleanup-deployment-55ffc6b7b6-q4z76 35ce0614-38f4-4d25-a6da-59f1fbfcdfa9 7047089 0 2020-04-10 21:49:31 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 11357637-fafd-4a13-bb0c-dcd62b986c60 0xc0004d88a7 0xc0004d88a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2cjtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2cjtx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2cjtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]stri
ng{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:49:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:49:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:49:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-10 21:49:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.127,StartTime:2020-04-10 21:49:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-10 21:49:34 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a8010111dc592f1e7c9884f315b601a57eeb5a7d860af88d41b240b564d76635,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:35.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9394" for this suite. • [SLOW TEST:9.230 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":154,"skipped":2414,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:35.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Apr 10 21:49:35.913: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:35.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8138" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":155,"skipped":2416,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:36.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be 
terminated STEP: the termination message should be set Apr 10 21:49:40.244: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:40.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7977" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2418,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:40.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-2edf7568-12b9-4802-b514-48b79d297de9 STEP: Creating a pod to test consume configMaps Apr 10 21:49:40.382: INFO: Waiting up to 5m0s for pod "pod-configmaps-60042136-f163-4982-946f-262b114f177e" in namespace "configmap-7316" to be "success or failure" Apr 10 21:49:40.386: INFO: Pod "pod-configmaps-60042136-f163-4982-946f-262b114f177e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.424022ms Apr 10 21:49:42.402: INFO: Pod "pod-configmaps-60042136-f163-4982-946f-262b114f177e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019125849s Apr 10 21:49:44.406: INFO: Pod "pod-configmaps-60042136-f163-4982-946f-262b114f177e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023246576s STEP: Saw pod success Apr 10 21:49:44.406: INFO: Pod "pod-configmaps-60042136-f163-4982-946f-262b114f177e" satisfied condition "success or failure" Apr 10 21:49:44.408: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-60042136-f163-4982-946f-262b114f177e container configmap-volume-test: STEP: delete the pod Apr 10 21:49:44.454: INFO: Waiting for pod pod-configmaps-60042136-f163-4982-946f-262b114f177e to disappear Apr 10 21:49:44.618: INFO: Pod pod-configmaps-60042136-f163-4982-946f-262b114f177e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:44.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7316" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2419,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:44.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:48.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5564" for this suite. 
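The Kubelet read-only-root test above verifies that a container with a read-only root filesystem cannot write to it. A hedged sketch of such a pod spec (names and image are assumptions, not from the run):

```yaml
# Illustrative sketch only: writes to the root filesystem should fail.
apiVersion: v1
kind: Pod
metadata:
  name: example-readonly-pod     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: busybox
    # The redirect to / is expected to fail with a read-only filesystem error.
    command: ["/bin/sh", "-c", "echo test > /file"]
    securityContext:
      readOnlyRootFilesystem: true
```

With `readOnlyRootFilesystem: true`, only explicitly mounted writable volumes (e.g. an `emptyDir`) accept writes; everything else in the container image is immutable at runtime.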
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2421,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:48.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 10 21:49:53.513: INFO: Successfully updated pod "annotationupdate46edbe1e-59f2-481d-8ea5-5c647aa5f9f2" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:55.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5045" for this suite. 
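The projected downward API test above updates a pod's annotations and checks that the projected file is refreshed by the kubelet. A minimal sketch of that mechanism, with hypothetical names and annotation values:

```yaml
# Illustrative sketch only: annotations exposed via a projected downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: example-annotation-pod   # hypothetical name
  annotations:
    build: "1"                   # updating this later refreshes the projected file
spec:
  containers:
  - name: client
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

After `kubectl annotate pod example-annotation-pod build=2 --overwrite`, the kubelet eventually rewrites `/etc/podinfo/annotations` in place, which is the propagation the test waits for.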
• [SLOW TEST:6.670 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2422,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:55.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Apr 10 21:49:55.628: INFO: Waiting up to 5m0s for pod "var-expansion-f2de6077-f8af-4047-9c08-44d795f9d453" in namespace "var-expansion-3081" to be "success or failure" Apr 10 21:49:55.632: INFO: Pod "var-expansion-f2de6077-f8af-4047-9c08-44d795f9d453": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674929ms Apr 10 21:49:57.635: INFO: Pod "var-expansion-f2de6077-f8af-4047-9c08-44d795f9d453": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007589801s Apr 10 21:49:59.640: INFO: Pod "var-expansion-f2de6077-f8af-4047-9c08-44d795f9d453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011745278s STEP: Saw pod success Apr 10 21:49:59.640: INFO: Pod "var-expansion-f2de6077-f8af-4047-9c08-44d795f9d453" satisfied condition "success or failure" Apr 10 21:49:59.643: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-f2de6077-f8af-4047-9c08-44d795f9d453 container dapi-container: STEP: delete the pod Apr 10 21:49:59.663: INFO: Waiting for pod var-expansion-f2de6077-f8af-4047-9c08-44d795f9d453 to disappear Apr 10 21:49:59.667: INFO: Pod var-expansion-f2de6077-f8af-4047-9c08-44d795f9d453 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:49:59.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3081" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2424,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:49:59.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat 
/tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-5a348810-2116-4f19-9c90-9519c1617dcf in namespace container-probe-264 Apr 10 21:50:03.783: INFO: Started pod busybox-5a348810-2116-4f19-9c90-9519c1617dcf in namespace container-probe-264 STEP: checking the pod's current state and verifying that restartCount is present Apr 10 21:50:03.786: INFO: Initial restart count of pod busybox-5a348810-2116-4f19-9c90-9519c1617dcf is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:54:04.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-264" for this suite. • [SLOW TEST:245.031 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:54:04.707: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 21:54:05.190: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 21:54:07.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152445, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152445, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152445, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722152445, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 21:54:10.293: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a 
configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:54:10.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4756" for this suite. STEP: Destroying namespace "webhook-4756-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.210 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":162,"skipped":2468,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:54:10.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:54:10.993: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d734ac09-26d9-48ce-b6d9-4d21aa94c8da" in namespace "security-context-test-2539" to be "success or failure" Apr 10 21:54:11.005: INFO: Pod "busybox-user-65534-d734ac09-26d9-48ce-b6d9-4d21aa94c8da": Phase="Pending", Reason="", readiness=false. Elapsed: 11.072072ms Apr 10 21:54:13.010: INFO: Pod "busybox-user-65534-d734ac09-26d9-48ce-b6d9-4d21aa94c8da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016118868s Apr 10 21:54:15.013: INFO: Pod "busybox-user-65534-d734ac09-26d9-48ce-b6d9-4d21aa94c8da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019755834s Apr 10 21:54:15.013: INFO: Pod "busybox-user-65534-d734ac09-26d9-48ce-b6d9-4d21aa94c8da" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:54:15.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2539" for this suite. 
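The Security Context test above runs a container as the non-root UID 65534 and checks its effective user. A hedged sketch of the corresponding pod spec (names and image are illustrative assumptions):

```yaml
# Illustrative sketch only: force the container process to run as UID 65534.
apiVersion: v1
kind: Pod
metadata:
  name: example-uid-pod          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-user
    image: busybox
    command: ["/bin/sh", "-c", "id -u"]   # expected to print 65534
    securityContext:
      runAsUser: 65534
```

`runAsUser` at the container level overrides any USER directive in the image; 65534 is conventionally the unprivileged `nobody` user on Linux.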
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2482,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:54:15.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should 
get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:54:43.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5407" for this suite. • [SLOW TEST:28.772 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2485,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:54:43.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Apr 10 21:54:43.920: INFO: Waiting up to 5m0s for pod "var-expansion-297ab0b9-ade7-4d57-aed0-81387789e144" in namespace "var-expansion-2204" to be "success or failure" Apr 10 21:54:43.924: INFO: Pod "var-expansion-297ab0b9-ade7-4d57-aed0-81387789e144": Phase="Pending", Reason="", readiness=false. Elapsed: 3.341279ms Apr 10 21:54:45.981: INFO: Pod "var-expansion-297ab0b9-ade7-4d57-aed0-81387789e144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060952208s Apr 10 21:54:47.986: INFO: Pod "var-expansion-297ab0b9-ade7-4d57-aed0-81387789e144": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065148661s STEP: Saw pod success Apr 10 21:54:47.986: INFO: Pod "var-expansion-297ab0b9-ade7-4d57-aed0-81387789e144" satisfied condition "success or failure" Apr 10 21:54:47.989: INFO: Trying to get logs from node jerma-worker pod var-expansion-297ab0b9-ade7-4d57-aed0-81387789e144 container dapi-container: STEP: delete the pod Apr 10 21:54:48.021: INFO: Waiting for pod var-expansion-297ab0b9-ade7-4d57-aed0-81387789e144 to disappear Apr 10 21:54:48.031: INFO: Pod var-expansion-297ab0b9-ade7-4d57-aed0-81387789e144 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:54:48.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2204" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:54:48.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:55:01.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9148" for this suite. • [SLOW TEST:13.234 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":166,"skipped":2554,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:55:01.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-42c57f6d-4e09-402e-ad5f-4fa210c035d8 STEP: Creating a pod to test consume secrets Apr 10 21:55:01.364: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71f54c9d-e29d-46bf-88d2-583d75dd87bd" in namespace "projected-8761" to be "success or failure" Apr 10 21:55:01.371: INFO: Pod "pod-projected-secrets-71f54c9d-e29d-46bf-88d2-583d75dd87bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412393ms Apr 10 21:55:03.407: INFO: Pod "pod-projected-secrets-71f54c9d-e29d-46bf-88d2-583d75dd87bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042280167s Apr 10 21:55:05.411: INFO: Pod "pod-projected-secrets-71f54c9d-e29d-46bf-88d2-583d75dd87bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046657505s STEP: Saw pod success Apr 10 21:55:05.411: INFO: Pod "pod-projected-secrets-71f54c9d-e29d-46bf-88d2-583d75dd87bd" satisfied condition "success or failure" Apr 10 21:55:05.414: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-71f54c9d-e29d-46bf-88d2-583d75dd87bd container projected-secret-volume-test: STEP: delete the pod Apr 10 21:55:05.432: INFO: Waiting for pod pod-projected-secrets-71f54c9d-e29d-46bf-88d2-583d75dd87bd to disappear Apr 10 21:55:05.437: INFO: Pod pod-projected-secrets-71f54c9d-e29d-46bf-88d2-583d75dd87bd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:55:05.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8761" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2564,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:55:05.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9572 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 10 21:55:05.557: INFO: Found 0 stateful pods, waiting for 3 Apr 10 21:55:15.561: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:55:15.561: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:55:15.561: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 10 21:55:15.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9572 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:55:18.497: INFO: stderr: "I0410 21:55:18.395941 3084 log.go:172] (0xc000880bb0) (0xc000518780) Create stream\nI0410 21:55:18.395988 3084 log.go:172] (0xc000880bb0) (0xc000518780) Stream added, broadcasting: 1\nI0410 21:55:18.399311 3084 log.go:172] (0xc000880bb0) Reply frame received for 1\nI0410 21:55:18.399361 3084 log.go:172] (0xc000880bb0) (0xc0007c2b40) Create stream\nI0410 21:55:18.399371 3084 log.go:172] (0xc000880bb0) (0xc0007c2b40) Stream added, broadcasting: 3\nI0410 21:55:18.400438 3084 log.go:172] (0xc000880bb0) Reply frame received for 3\nI0410 21:55:18.400473 3084 log.go:172] (0xc000880bb0) (0xc0007ae000) Create stream\nI0410 21:55:18.400480 3084 log.go:172] (0xc000880bb0) (0xc0007ae000) Stream added, broadcasting: 5\nI0410 21:55:18.401605 3084 log.go:172] (0xc000880bb0) Reply frame received for 5\nI0410 21:55:18.462270 3084 log.go:172] (0xc000880bb0) Data frame received for 5\nI0410 21:55:18.462299 3084 log.go:172] (0xc0007ae000) (5) Data frame handling\nI0410 21:55:18.462317 3084 
log.go:172] (0xc0007ae000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:55:18.490474 3084 log.go:172] (0xc000880bb0) Data frame received for 3\nI0410 21:55:18.490500 3084 log.go:172] (0xc0007c2b40) (3) Data frame handling\nI0410 21:55:18.490509 3084 log.go:172] (0xc0007c2b40) (3) Data frame sent\nI0410 21:55:18.490516 3084 log.go:172] (0xc000880bb0) Data frame received for 3\nI0410 21:55:18.490522 3084 log.go:172] (0xc0007c2b40) (3) Data frame handling\nI0410 21:55:18.490543 3084 log.go:172] (0xc000880bb0) Data frame received for 5\nI0410 21:55:18.490561 3084 log.go:172] (0xc0007ae000) (5) Data frame handling\nI0410 21:55:18.492317 3084 log.go:172] (0xc000880bb0) Data frame received for 1\nI0410 21:55:18.492330 3084 log.go:172] (0xc000518780) (1) Data frame handling\nI0410 21:55:18.492335 3084 log.go:172] (0xc000518780) (1) Data frame sent\nI0410 21:55:18.492351 3084 log.go:172] (0xc000880bb0) (0xc000518780) Stream removed, broadcasting: 1\nI0410 21:55:18.492401 3084 log.go:172] (0xc000880bb0) Go away received\nI0410 21:55:18.492579 3084 log.go:172] (0xc000880bb0) (0xc000518780) Stream removed, broadcasting: 1\nI0410 21:55:18.492590 3084 log.go:172] (0xc000880bb0) (0xc0007c2b40) Stream removed, broadcasting: 3\nI0410 21:55:18.492600 3084 log.go:172] (0xc000880bb0) (0xc0007ae000) Stream removed, broadcasting: 5\n" Apr 10 21:55:18.497: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:55:18.497: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 10 21:55:28.567: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 10 21:55:38.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-9572 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 21:55:38.881: INFO: stderr: "I0410 21:55:38.773798 3116 log.go:172] (0xc000b13ad0) (0xc000a9a640) Create stream\nI0410 21:55:38.773852 3116 log.go:172] (0xc000b13ad0) (0xc000a9a640) Stream added, broadcasting: 1\nI0410 21:55:38.778558 3116 log.go:172] (0xc000b13ad0) Reply frame received for 1\nI0410 21:55:38.778608 3116 log.go:172] (0xc000b13ad0) (0xc0006f4780) Create stream\nI0410 21:55:38.778626 3116 log.go:172] (0xc000b13ad0) (0xc0006f4780) Stream added, broadcasting: 3\nI0410 21:55:38.779633 3116 log.go:172] (0xc000b13ad0) Reply frame received for 3\nI0410 21:55:38.779678 3116 log.go:172] (0xc000b13ad0) (0xc000595540) Create stream\nI0410 21:55:38.779691 3116 log.go:172] (0xc000b13ad0) (0xc000595540) Stream added, broadcasting: 5\nI0410 21:55:38.780555 3116 log.go:172] (0xc000b13ad0) Reply frame received for 5\nI0410 21:55:38.874019 3116 log.go:172] (0xc000b13ad0) Data frame received for 5\nI0410 21:55:38.874058 3116 log.go:172] (0xc000595540) (5) Data frame handling\nI0410 21:55:38.874075 3116 log.go:172] (0xc000595540) (5) Data frame sent\nI0410 21:55:38.874093 3116 log.go:172] (0xc000b13ad0) Data frame received for 5\nI0410 21:55:38.874110 3116 log.go:172] (0xc000595540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0410 21:55:38.874152 3116 log.go:172] (0xc000b13ad0) Data frame received for 3\nI0410 21:55:38.874167 3116 log.go:172] (0xc0006f4780) (3) Data frame handling\nI0410 21:55:38.874187 3116 log.go:172] (0xc0006f4780) (3) Data frame sent\nI0410 21:55:38.874206 3116 log.go:172] (0xc000b13ad0) Data frame received for 3\nI0410 21:55:38.874226 3116 log.go:172] (0xc0006f4780) (3) Data frame handling\nI0410 21:55:38.875840 3116 log.go:172] (0xc000b13ad0) Data frame received for 1\nI0410 21:55:38.875877 3116 log.go:172] (0xc000a9a640) (1) Data frame handling\nI0410 21:55:38.875907 3116 log.go:172] (0xc000a9a640) 
(1) Data frame sent\nI0410 21:55:38.875942 3116 log.go:172] (0xc000b13ad0) (0xc000a9a640) Stream removed, broadcasting: 1\nI0410 21:55:38.875985 3116 log.go:172] (0xc000b13ad0) Go away received\nI0410 21:55:38.876467 3116 log.go:172] (0xc000b13ad0) (0xc000a9a640) Stream removed, broadcasting: 1\nI0410 21:55:38.876495 3116 log.go:172] (0xc000b13ad0) (0xc0006f4780) Stream removed, broadcasting: 3\nI0410 21:55:38.876510 3116 log.go:172] (0xc000b13ad0) (0xc000595540) Stream removed, broadcasting: 5\n" Apr 10 21:55:38.881: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:55:38.881: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Apr 10 21:55:58.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9572 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 10 21:55:59.141: INFO: stderr: "I0410 21:55:59.036161 3135 log.go:172] (0xc0008eec60) (0xc0003060a0) Create stream\nI0410 21:55:59.036246 3135 log.go:172] (0xc0008eec60) (0xc0003060a0) Stream added, broadcasting: 1\nI0410 21:55:59.039429 3135 log.go:172] (0xc0008eec60) Reply frame received for 1\nI0410 21:55:59.039481 3135 log.go:172] (0xc0008eec60) (0xc0006f0aa0) Create stream\nI0410 21:55:59.039497 3135 log.go:172] (0xc0008eec60) (0xc0006f0aa0) Stream added, broadcasting: 3\nI0410 21:55:59.040415 3135 log.go:172] (0xc0008eec60) Reply frame received for 3\nI0410 21:55:59.040442 3135 log.go:172] (0xc0008eec60) (0xc0007661e0) Create stream\nI0410 21:55:59.040449 3135 log.go:172] (0xc0008eec60) (0xc0007661e0) Stream added, broadcasting: 5\nI0410 21:55:59.041619 3135 log.go:172] (0xc0008eec60) Reply frame received for 5\nI0410 21:55:59.106104 3135 log.go:172] (0xc0008eec60) Data frame received for 5\nI0410 21:55:59.106135 3135 log.go:172] (0xc0007661e0) (5) Data frame 
handling\nI0410 21:55:59.106156 3135 log.go:172] (0xc0007661e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0410 21:55:59.134118 3135 log.go:172] (0xc0008eec60) Data frame received for 3\nI0410 21:55:59.134143 3135 log.go:172] (0xc0006f0aa0) (3) Data frame handling\nI0410 21:55:59.134163 3135 log.go:172] (0xc0006f0aa0) (3) Data frame sent\nI0410 21:55:59.134394 3135 log.go:172] (0xc0008eec60) Data frame received for 5\nI0410 21:55:59.134404 3135 log.go:172] (0xc0007661e0) (5) Data frame handling\nI0410 21:55:59.134416 3135 log.go:172] (0xc0008eec60) Data frame received for 3\nI0410 21:55:59.134420 3135 log.go:172] (0xc0006f0aa0) (3) Data frame handling\nI0410 21:55:59.136584 3135 log.go:172] (0xc0008eec60) Data frame received for 1\nI0410 21:55:59.136614 3135 log.go:172] (0xc0003060a0) (1) Data frame handling\nI0410 21:55:59.136632 3135 log.go:172] (0xc0003060a0) (1) Data frame sent\nI0410 21:55:59.136647 3135 log.go:172] (0xc0008eec60) (0xc0003060a0) Stream removed, broadcasting: 1\nI0410 21:55:59.136700 3135 log.go:172] (0xc0008eec60) Go away received\nI0410 21:55:59.137005 3135 log.go:172] (0xc0008eec60) (0xc0003060a0) Stream removed, broadcasting: 1\nI0410 21:55:59.137025 3135 log.go:172] (0xc0008eec60) (0xc0006f0aa0) Stream removed, broadcasting: 3\nI0410 21:55:59.137035 3135 log.go:172] (0xc0008eec60) (0xc0007661e0) Stream removed, broadcasting: 5\n" Apr 10 21:55:59.141: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 10 21:55:59.141: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 10 21:56:09.171: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 10 21:56:19.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9572 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 10 
21:56:19.436: INFO: stderr: "I0410 21:56:19.368916 3155 log.go:172] (0xc0003db600) (0xc000b60280) Create stream\nI0410 21:56:19.368988 3155 log.go:172] (0xc0003db600) (0xc000b60280) Stream added, broadcasting: 1\nI0410 21:56:19.373034 3155 log.go:172] (0xc0003db600) Reply frame received for 1\nI0410 21:56:19.373078 3155 log.go:172] (0xc0003db600) (0xc000abe1e0) Create stream\nI0410 21:56:19.373094 3155 log.go:172] (0xc0003db600) (0xc000abe1e0) Stream added, broadcasting: 3\nI0410 21:56:19.374268 3155 log.go:172] (0xc0003db600) Reply frame received for 3\nI0410 21:56:19.374304 3155 log.go:172] (0xc0003db600) (0xc00073bcc0) Create stream\nI0410 21:56:19.374324 3155 log.go:172] (0xc0003db600) (0xc00073bcc0) Stream added, broadcasting: 5\nI0410 21:56:19.375264 3155 log.go:172] (0xc0003db600) Reply frame received for 5\nI0410 21:56:19.430913 3155 log.go:172] (0xc0003db600) Data frame received for 5\nI0410 21:56:19.430958 3155 log.go:172] (0xc00073bcc0) (5) Data frame handling\nI0410 21:56:19.430991 3155 log.go:172] (0xc00073bcc0) (5) Data frame sent\nI0410 21:56:19.431004 3155 log.go:172] (0xc0003db600) Data frame received for 5\nI0410 21:56:19.431014 3155 log.go:172] (0xc00073bcc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0410 21:56:19.431051 3155 log.go:172] (0xc0003db600) Data frame received for 3\nI0410 21:56:19.431077 3155 log.go:172] (0xc000abe1e0) (3) Data frame handling\nI0410 21:56:19.431094 3155 log.go:172] (0xc000abe1e0) (3) Data frame sent\nI0410 21:56:19.431105 3155 log.go:172] (0xc0003db600) Data frame received for 3\nI0410 21:56:19.431111 3155 log.go:172] (0xc000abe1e0) (3) Data frame handling\nI0410 21:56:19.432226 3155 log.go:172] (0xc0003db600) Data frame received for 1\nI0410 21:56:19.432255 3155 log.go:172] (0xc000b60280) (1) Data frame handling\nI0410 21:56:19.432268 3155 log.go:172] (0xc000b60280) (1) Data frame sent\nI0410 21:56:19.432279 3155 log.go:172] (0xc0003db600) (0xc000b60280) Stream removed, 
broadcasting: 1\nI0410 21:56:19.432292 3155 log.go:172] (0xc0003db600) Go away received\nI0410 21:56:19.432627 3155 log.go:172] (0xc0003db600) (0xc000b60280) Stream removed, broadcasting: 1\nI0410 21:56:19.432648 3155 log.go:172] (0xc0003db600) (0xc000abe1e0) Stream removed, broadcasting: 3\nI0410 21:56:19.432654 3155 log.go:172] (0xc0003db600) (0xc00073bcc0) Stream removed, broadcasting: 5\n" Apr 10 21:56:19.436: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 10 21:56:19.436: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 10 21:56:49.454: INFO: Waiting for StatefulSet statefulset-9572/ss2 to complete update Apr 10 21:56:49.454: INFO: Waiting for Pod statefulset-9572/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 10 21:56:59.461: INFO: Deleting all statefulset in ns statefulset-9572 Apr 10 21:56:59.464: INFO: Scaling statefulset ss2 to 0 Apr 10 21:57:29.498: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 21:57:29.501: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:57:29.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9572" for this suite. 
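[Editor's note] The rolling update and rollback exercised above hinge on the StatefulSet's `spec.updateStrategy`. A minimal manifest sketch of what the test drives — the names, labels, and service name here are illustrative assumptions, not the test's actual generated manifest; only the `ss2` name and httpd image tags come from the log:

```yaml
# Sketch only: approximates the StatefulSet this e2e test creates and updates.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # the headless "service test" created in namespace statefulset-9572
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # label/selector pairing is an assumption for illustration
  updateStrategy:
    type: RollingUpdate      # pods are replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine   # updated to 2.4.39-alpine, then rolled back
```

Changing `spec.template.spec.containers[0].image` produces a new controller revision (`ss2-84f9d6bf57` in the log); restoring the previous image rolls pods back to the prior revision (`ss2-65c7964b94`), which is why the log waits for each pod's revision label and proceeds in reverse ordinal order.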
• [SLOW TEST:144.083 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":168,"skipped":2574,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:57:29.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 10 21:57:29.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 
--image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9600' Apr 10 21:57:29.667: INFO: stderr: "" Apr 10 21:57:29.667: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 10 21:57:34.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9600 -o json' Apr 10 21:57:34.821: INFO: stderr: "" Apr 10 21:57:34.821: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-10T21:57:29Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9600\",\n \"resourceVersion\": \"7049240\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9600/pods/e2e-test-httpd-pod\",\n \"uid\": \"378e53a1-b1c9-4d8c-8bf7-d34b217b4820\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-phvc2\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-phvc2\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-phvc2\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-10T21:57:29Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-10T21:57:32Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-10T21:57:32Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-10T21:57:29Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://184f35ed3673efbe01f291f00968d997279ad50d58dd1ac4cb317207aa74dd03\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-10T21:57:32Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.139\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.139\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-10T21:57:29Z\"\n }\n}\n" STEP: replace the image in the pod Apr 10 21:57:34.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9600' Apr 10 21:57:35.243: INFO: stderr: "" Apr 10 21:57:35.243: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Apr 10 21:57:35.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9600' Apr 10 21:57:49.228: INFO: stderr: "" Apr 10 21:57:49.228: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:57:49.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9600" for this suite. • [SLOW TEST:19.707 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":169,"skipped":2584,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:57:49.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:58:03.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1858" for this suite. • [SLOW TEST:14.108 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":170,"skipped":2591,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:58:03.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Apr 10 21:58:03.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4849' Apr 10 21:58:03.884: INFO: stderr: "" Apr 10 21:58:03.884: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 10 21:58:03.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4849' Apr 10 21:58:03.998: INFO: stderr: "" Apr 10 21:58:03.998: INFO: stdout: "update-demo-nautilus-58nvp update-demo-nautilus-snlmv " Apr 10 21:58:03.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58nvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:04.125: INFO: stderr: "" Apr 10 21:58:04.125: INFO: stdout: "" Apr 10 21:58:04.125: INFO: update-demo-nautilus-58nvp is created but not running Apr 10 21:58:09.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4849' Apr 10 21:58:09.216: INFO: stderr: "" Apr 10 21:58:09.216: INFO: stdout: "update-demo-nautilus-58nvp update-demo-nautilus-snlmv " Apr 10 21:58:09.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58nvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:09.301: INFO: stderr: "" Apr 10 21:58:09.301: INFO: stdout: "true" Apr 10 21:58:09.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58nvp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:09.391: INFO: stderr: "" Apr 10 21:58:09.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 21:58:09.391: INFO: validating pod update-demo-nautilus-58nvp Apr 10 21:58:09.395: INFO: got data: { "image": "nautilus.jpg" } Apr 10 21:58:09.395: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 21:58:09.395: INFO: update-demo-nautilus-58nvp is verified up and running Apr 10 21:58:09.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-snlmv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:09.491: INFO: stderr: "" Apr 10 21:58:09.491: INFO: stdout: "true" Apr 10 21:58:09.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-snlmv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:09.596: INFO: stderr: "" Apr 10 21:58:09.596: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 21:58:09.596: INFO: validating pod update-demo-nautilus-snlmv Apr 10 21:58:09.600: INFO: got data: { "image": "nautilus.jpg" } Apr 10 21:58:09.600: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 10 21:58:09.600: INFO: update-demo-nautilus-snlmv is verified up and running STEP: rolling-update to new replication controller Apr 10 21:58:09.603: INFO: scanned /root for discovery docs: Apr 10 21:58:09.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4849' Apr 10 21:58:32.155: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 10 21:58:32.155: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 10 21:58:32.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4849' Apr 10 21:58:32.250: INFO: stderr: "" Apr 10 21:58:32.250: INFO: stdout: "update-demo-kitten-tkd9k update-demo-kitten-xwnzj " Apr 10 21:58:32.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tkd9k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:32.351: INFO: stderr: "" Apr 10 21:58:32.351: INFO: stdout: "true" Apr 10 21:58:32.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tkd9k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:32.444: INFO: stderr: "" Apr 10 21:58:32.444: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 10 21:58:32.444: INFO: validating pod update-demo-kitten-tkd9k Apr 10 21:58:32.451: INFO: got data: { "image": "kitten.jpg" } Apr 10 21:58:32.451: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 10 21:58:32.451: INFO: update-demo-kitten-tkd9k is verified up and running Apr 10 21:58:32.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xwnzj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:32.534: INFO: stderr: "" Apr 10 21:58:32.534: INFO: stdout: "true" Apr 10 21:58:32.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xwnzj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4849' Apr 10 21:58:32.627: INFO: stderr: "" Apr 10 21:58:32.627: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 10 21:58:32.627: INFO: validating pod update-demo-kitten-xwnzj Apr 10 21:58:32.632: INFO: got data: { "image": "kitten.jpg" } Apr 10 21:58:32.632: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Apr 10 21:58:32.632: INFO: update-demo-kitten-xwnzj is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:58:32.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4849" for this suite. • [SLOW TEST:29.296 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":171,"skipped":2623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:58:32.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 21:58:32.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f2bd2c1-6439-41d5-843e-247ccbe73d7b" in namespace "downward-api-682" to be "success or failure" Apr 10 21:58:32.728: INFO: Pod "downwardapi-volume-8f2bd2c1-6439-41d5-843e-247ccbe73d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.611163ms Apr 10 21:58:34.731: INFO: Pod "downwardapi-volume-8f2bd2c1-6439-41d5-843e-247ccbe73d7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017986418s Apr 10 21:58:36.736: INFO: Pod "downwardapi-volume-8f2bd2c1-6439-41d5-843e-247ccbe73d7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022364754s STEP: Saw pod success Apr 10 21:58:36.736: INFO: Pod "downwardapi-volume-8f2bd2c1-6439-41d5-843e-247ccbe73d7b" satisfied condition "success or failure" Apr 10 21:58:36.739: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8f2bd2c1-6439-41d5-843e-247ccbe73d7b container client-container: STEP: delete the pod Apr 10 21:58:36.782: INFO: Waiting for pod downwardapi-volume-8f2bd2c1-6439-41d5-843e-247ccbe73d7b to disappear Apr 10 21:58:36.795: INFO: Pod downwardapi-volume-8f2bd2c1-6439-41d5-843e-247ccbe73d7b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:58:36.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-682" for this suite. 
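[Editor's note] The Downward API test above mounts a volume whose file exposes the container's CPU limit; because the test pod sets no `resources.limits.cpu`, the kubelet substitutes the node's allocatable CPU as the default. A minimal pod sketch — the pod name, file path, and command are illustrative assumptions, not the test's generated manifest:

```yaml
# Sketch only: no resources.limits.cpu is set, so "cpu_limit" resolves to node allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: "1m"    # report the value in millicores
```

The test's "Saw pod success" step corresponds to this pod running to completion (phase `Succeeded`) after printing the resolved limit, which the framework then reads back from the container logs.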
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2648,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:58:36.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2290.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-2290.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2290.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.38.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.38.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.38.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.38.100_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2290.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2290.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.test-service-2.dns-2290.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2290.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.38.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.38.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.38.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.38.100_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 21:58:43.006: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:43.010: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:43.013: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:43.016: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:43.037: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:43.041: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:43.044: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:43.047: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:43.064: INFO: Lookups using dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4 failed for: [wheezy_udp@dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_udp@dns-test-service.dns-2290.svc.cluster.local jessie_tcp@dns-test-service.dns-2290.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local] Apr 10 21:58:48.069: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:48.073: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:48.076: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:48.079: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:48.102: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:48.105: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:48.108: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:48.112: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod 
dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:48.132: INFO: Lookups using dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4 failed for: [wheezy_udp@dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_udp@dns-test-service.dns-2290.svc.cluster.local jessie_tcp@dns-test-service.dns-2290.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local] Apr 10 21:58:53.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:53.073: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:53.076: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:53.080: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:53.104: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc.cluster.local from pod 
dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:53.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:53.110: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:53.114: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:53.133: INFO: Lookups using dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4 failed for: [wheezy_udp@dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_udp@dns-test-service.dns-2290.svc.cluster.local jessie_tcp@dns-test-service.dns-2290.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local] Apr 10 21:58:58.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:58.073: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local from pod 
dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:58.076: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:58.080: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:58.102: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:58.105: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:58.108: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:58.111: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:58:58.131: INFO: Lookups using dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4 failed for: [wheezy_udp@dns-test-service.dns-2290.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_udp@dns-test-service.dns-2290.svc.cluster.local jessie_tcp@dns-test-service.dns-2290.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local] Apr 10 21:59:03.075: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:03.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:03.082: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:03.086: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:03.107: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:03.109: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested 
resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:03.129: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:03.135: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:03.168: INFO: Lookups using dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4 failed for: [wheezy_udp@dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_udp@dns-test-service.dns-2290.svc.cluster.local jessie_tcp@dns-test-service.dns-2290.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local] Apr 10 21:59:08.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:08.073: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:08.076: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods 
dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:08.080: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:08.101: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:08.104: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:08.106: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:08.109: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local from pod dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4: the server could not find the requested resource (get pods dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4) Apr 10 21:59:08.124: INFO: Lookups using dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4 failed for: [wheezy_udp@dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@dns-test-service.dns-2290.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local jessie_udp@dns-test-service.dns-2290.svc.cluster.local jessie_tcp@dns-test-service.dns-2290.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc.cluster.local] Apr 10 21:59:13.125: INFO: DNS probes using dns-2290/dns-test-530972fd-f1fd-48e8-b75d-d0a9d79637f4 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:13.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2290" for this suite. • [SLOW TEST:36.837 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":173,"skipped":2663,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:13.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:59:13.715: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: 
submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:17.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4748" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2672,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:17.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:17.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6937" for this suite. 
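The QOS Class test above verifies that a pod whose containers set matching resource requests and limits is classified as Guaranteed. A simplified sketch of the classification rule (the real kubelet logic also accounts for init containers and quantity normalization, which this ignores):

```python
def qos_class(containers):
    """Simplified Kubernetes QoS classification for a pod's containers.

    Each container is a dict with optional "requests" and "limits"
    dicts mapping resource name -> quantity string.
    """
    if not any(c.get("requests") or c.get("limits") for c in containers):
        return "BestEffort"
    for c in containers:
        limits = c.get("limits", {})
        requests = c.get("requests", limits)  # requests default to limits
        if {"cpu", "memory"} - set(limits) or requests != limits:
            return "Burstable"
    return "Guaranteed"

# Matching requests and limits for cpu and memory, as in this test:
spec = [{"requests": {"cpu": "100m", "memory": "100Mi"},
         "limits":   {"cpu": "100m", "memory": "100Mi"}}]
print(qos_class(spec))  # Guaranteed
```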
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":175,"skipped":2712,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:17.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-6a5d401a-bc67-42a4-9fe2-b194bf105a0a STEP: Creating a pod to test consume configMaps Apr 10 21:59:17.980: INFO: Waiting up to 5m0s for pod "pod-configmaps-0fe4dff5-a3b2-4c82-abc5-c7c1fc006cb8" in namespace "configmap-8404" to be "success or failure" Apr 10 21:59:17.996: INFO: Pod "pod-configmaps-0fe4dff5-a3b2-4c82-abc5-c7c1fc006cb8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.178396ms Apr 10 21:59:20.000: INFO: Pod "pod-configmaps-0fe4dff5-a3b2-4c82-abc5-c7c1fc006cb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019678657s Apr 10 21:59:22.004: INFO: Pod "pod-configmaps-0fe4dff5-a3b2-4c82-abc5-c7c1fc006cb8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024077283s STEP: Saw pod success Apr 10 21:59:22.004: INFO: Pod "pod-configmaps-0fe4dff5-a3b2-4c82-abc5-c7c1fc006cb8" satisfied condition "success or failure" Apr 10 21:59:22.007: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-0fe4dff5-a3b2-4c82-abc5-c7c1fc006cb8 container configmap-volume-test: STEP: delete the pod Apr 10 21:59:22.032: INFO: Waiting for pod pod-configmaps-0fe4dff5-a3b2-4c82-abc5-c7c1fc006cb8 to disappear Apr 10 21:59:22.037: INFO: Pod pod-configmaps-0fe4dff5-a3b2-4c82-abc5-c7c1fc006cb8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:22.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8404" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2714,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:22.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a 
watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 10 21:59:22.196: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-resource-version ad1ce035-5014-4bc8-976b-022fcadbe359 7050012 0 2020-04-10 21:59:22 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 21:59:22.197: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9026 /api/v1/namespaces/watch-9026/configmaps/e2e-watch-test-resource-version ad1ce035-5014-4bc8-976b-022fcadbe359 7050013 0 2020-04-10 21:59:22 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:22.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9026" for this suite. 
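The watch test above starts a watch from the resourceVersion returned by the first update, and consequently observes only the later MODIFIED and DELETED events. A toy model of that filtering, under the illustrative assumption that resourceVersions compare as integers (in the real API they are opaque strings):

```python
def events_since(events, start_rv):
    """Deliver only the events newer than a given resourceVersion.

    events: ordered list of (resource_version, event_type) tuples.
    Models "watch from a specific resource version"; real watches
    stream from the API server rather than filtering a list.
    """
    return [(rv, etype) for rv, etype in events if rv > start_rv]

# History mirroring the log: first update, second update, delete.
history = [(7050010, "MODIFIED"), (7050012, "MODIFIED"), (7050013, "DELETED")]
print(events_since(history, 7050010))
```

Starting from the first update's resourceVersion yields exactly the two notifications the test expects.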
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":177,"skipped":2721,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:22.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 10 21:59:22.307: INFO: Waiting up to 5m0s for pod "pod-460531fd-64e5-4445-89c0-6390cf290396" in namespace "emptydir-8225" to be "success or failure" Apr 10 21:59:22.318: INFO: Pod "pod-460531fd-64e5-4445-89c0-6390cf290396": Phase="Pending", Reason="", readiness=false. Elapsed: 10.789434ms Apr 10 21:59:24.322: INFO: Pod "pod-460531fd-64e5-4445-89c0-6390cf290396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014767799s Apr 10 21:59:26.326: INFO: Pod "pod-460531fd-64e5-4445-89c0-6390cf290396": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019121516s STEP: Saw pod success Apr 10 21:59:26.326: INFO: Pod "pod-460531fd-64e5-4445-89c0-6390cf290396" satisfied condition "success or failure" Apr 10 21:59:26.329: INFO: Trying to get logs from node jerma-worker2 pod pod-460531fd-64e5-4445-89c0-6390cf290396 container test-container: STEP: delete the pod Apr 10 21:59:26.364: INFO: Waiting for pod pod-460531fd-64e5-4445-89c0-6390cf290396 to disappear Apr 10 21:59:26.378: INFO: Pod pod-460531fd-64e5-4445-89c0-6390cf290396 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:26.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8225" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2724,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:26.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:59:26.482: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running 
on any nodes. Apr 10 21:59:26.492: INFO: Number of nodes with available pods: 0 Apr 10 21:59:26.492: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 10 21:59:26.610: INFO: Number of nodes with available pods: 0 Apr 10 21:59:26.610: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:27.614: INFO: Number of nodes with available pods: 0 Apr 10 21:59:27.614: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:28.613: INFO: Number of nodes with available pods: 0 Apr 10 21:59:28.613: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:29.625: INFO: Number of nodes with available pods: 1 Apr 10 21:59:29.625: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 10 21:59:29.656: INFO: Number of nodes with available pods: 1 Apr 10 21:59:29.656: INFO: Number of running nodes: 0, number of available pods: 1 Apr 10 21:59:30.660: INFO: Number of nodes with available pods: 0 Apr 10 21:59:30.660: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 10 21:59:30.675: INFO: Number of nodes with available pods: 0 Apr 10 21:59:30.675: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:31.678: INFO: Number of nodes with available pods: 0 Apr 10 21:59:31.678: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:32.680: INFO: Number of nodes with available pods: 0 Apr 10 21:59:32.680: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:33.677: INFO: Number of nodes with available pods: 0 Apr 10 21:59:33.677: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:34.679: INFO: Number of nodes with available pods: 0 Apr 10 21:59:34.679: INFO: Node jerma-worker2 
is running more than one daemon pod Apr 10 21:59:35.679: INFO: Number of nodes with available pods: 0 Apr 10 21:59:35.679: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:36.691: INFO: Number of nodes with available pods: 0 Apr 10 21:59:36.691: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:37.679: INFO: Number of nodes with available pods: 0 Apr 10 21:59:37.679: INFO: Node jerma-worker2 is running more than one daemon pod Apr 10 21:59:38.679: INFO: Number of nodes with available pods: 1 Apr 10 21:59:38.679: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9275, will wait for the garbage collector to delete the pods Apr 10 21:59:38.743: INFO: Deleting DaemonSet.extensions daemon-set took: 6.428369ms Apr 10 21:59:39.043: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.214739ms Apr 10 21:59:41.746: INFO: Number of nodes with available pods: 0 Apr 10 21:59:41.746: INFO: Number of running nodes: 0, number of available pods: 0 Apr 10 21:59:41.748: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9275/daemonsets","resourceVersion":"7050181"},"items":null} Apr 10 21:59:41.750: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9275/pods","resourceVersion":"7050181"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:41.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9275" for this suite. 
• [SLOW TEST:15.426 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":179,"skipped":2756,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:41.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-8049b8b1-9e46-4cfe-8134-fab589da3c45 STEP: Creating a pod to test consume configMaps Apr 10 21:59:41.865: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81e366fe-77bb-4008-b24d-822df8b15fc6" in namespace "projected-6573" to be "success or failure" Apr 10 21:59:41.888: INFO: Pod "pod-projected-configmaps-81e366fe-77bb-4008-b24d-822df8b15fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.124794ms Apr 10 21:59:43.893: INFO: Pod "pod-projected-configmaps-81e366fe-77bb-4008-b24d-822df8b15fc6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027529866s Apr 10 21:59:45.897: INFO: Pod "pod-projected-configmaps-81e366fe-77bb-4008-b24d-822df8b15fc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032035741s STEP: Saw pod success Apr 10 21:59:45.897: INFO: Pod "pod-projected-configmaps-81e366fe-77bb-4008-b24d-822df8b15fc6" satisfied condition "success or failure" Apr 10 21:59:45.900: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-81e366fe-77bb-4008-b24d-822df8b15fc6 container projected-configmap-volume-test: STEP: delete the pod Apr 10 21:59:45.933: INFO: Waiting for pod pod-projected-configmaps-81e366fe-77bb-4008-b24d-822df8b15fc6 to disappear Apr 10 21:59:45.961: INFO: Pod pod-projected-configmaps-81e366fe-77bb-4008-b24d-822df8b15fc6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:45.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6573" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2769,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:45.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 21:59:46.032: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.790308ms) Apr 10 21:59:46.034: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.604728ms) Apr 10 21:59:46.036: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.154013ms) Apr 10 21:59:46.039: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.568925ms) Apr 10 21:59:46.042: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.000492ms) Apr 10 21:59:46.045: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.708168ms) Apr 10 21:59:46.047: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.661048ms) Apr 10 21:59:46.050: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.839373ms) Apr 10 21:59:46.053: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.058469ms) Apr 10 21:59:46.056: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.054889ms) Apr 10 21:59:46.060: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.458629ms) Apr 10 21:59:46.087: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 27.500046ms) Apr 10 21:59:46.092: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.997469ms) Apr 10 21:59:46.095: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.754497ms) Apr 10 21:59:46.098: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.092756ms) Apr 10 21:59:46.101: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.707196ms) Apr 10 21:59:46.104: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.453929ms) Apr 10 21:59:46.107: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.699798ms) Apr 10 21:59:46.110: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.852849ms) Apr 10 21:59:46.113: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.073912ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:46.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3942" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":181,"skipped":2778,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:46.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-fc0ba088-2807-494e-acbe-379c19e664ad STEP: Creating configMap with name cm-test-opt-upd-862b0df4-d7a5-4ab0-8abb-f2f59fbd93f8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-fc0ba088-2807-494e-acbe-379c19e664ad STEP: Updating configmap cm-test-opt-upd-862b0df4-d7a5-4ab0-8abb-f2f59fbd93f8 STEP: Creating configMap with name cm-test-opt-create-4d143f47-d75d-430b-beba-f3727fe6cd87 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 21:59:56.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "projected-9463" for this suite. • [SLOW TEST:10.208 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2783,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 21:59:56.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9260 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9260 STEP: creating replication controller externalsvc in namespace services-9260 I0410 21:59:56.531402 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9260, 
replica count: 2 I0410 21:59:59.581813 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 22:00:02.582003 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 10 22:00:02.622: INFO: Creating new exec pod Apr 10 22:00:06.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9260 execpodbskch -- /bin/sh -x -c nslookup clusterip-service' Apr 10 22:00:06.897: INFO: stderr: "I0410 22:00:06.790701 3564 log.go:172] (0xc000a096b0) (0xc000a108c0) Create stream\nI0410 22:00:06.790779 3564 log.go:172] (0xc000a096b0) (0xc000a108c0) Stream added, broadcasting: 1\nI0410 22:00:06.795792 3564 log.go:172] (0xc000a096b0) Reply frame received for 1\nI0410 22:00:06.795828 3564 log.go:172] (0xc000a096b0) (0xc000686820) Create stream\nI0410 22:00:06.795838 3564 log.go:172] (0xc000a096b0) (0xc000686820) Stream added, broadcasting: 3\nI0410 22:00:06.796968 3564 log.go:172] (0xc000a096b0) Reply frame received for 3\nI0410 22:00:06.797018 3564 log.go:172] (0xc000a096b0) (0xc00075b5e0) Create stream\nI0410 22:00:06.797034 3564 log.go:172] (0xc000a096b0) (0xc00075b5e0) Stream added, broadcasting: 5\nI0410 22:00:06.798405 3564 log.go:172] (0xc000a096b0) Reply frame received for 5\nI0410 22:00:06.880446 3564 log.go:172] (0xc000a096b0) Data frame received for 5\nI0410 22:00:06.880481 3564 log.go:172] (0xc00075b5e0) (5) Data frame handling\nI0410 22:00:06.880506 3564 log.go:172] (0xc00075b5e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0410 22:00:06.888608 3564 log.go:172] (0xc000a096b0) Data frame received for 3\nI0410 22:00:06.888632 3564 log.go:172] (0xc000686820) (3) Data frame handling\nI0410 22:00:06.888650 3564 log.go:172] (0xc000686820) (3) Data frame sent\nI0410 22:00:06.889582 
3564 log.go:172] (0xc000a096b0) Data frame received for 3\nI0410 22:00:06.889615 3564 log.go:172] (0xc000686820) (3) Data frame handling\nI0410 22:00:06.889658 3564 log.go:172] (0xc000686820) (3) Data frame sent\nI0410 22:00:06.890038 3564 log.go:172] (0xc000a096b0) Data frame received for 5\nI0410 22:00:06.890071 3564 log.go:172] (0xc00075b5e0) (5) Data frame handling\nI0410 22:00:06.890163 3564 log.go:172] (0xc000a096b0) Data frame received for 3\nI0410 22:00:06.890180 3564 log.go:172] (0xc000686820) (3) Data frame handling\nI0410 22:00:06.892083 3564 log.go:172] (0xc000a096b0) Data frame received for 1\nI0410 22:00:06.892101 3564 log.go:172] (0xc000a108c0) (1) Data frame handling\nI0410 22:00:06.892116 3564 log.go:172] (0xc000a108c0) (1) Data frame sent\nI0410 22:00:06.892127 3564 log.go:172] (0xc000a096b0) (0xc000a108c0) Stream removed, broadcasting: 1\nI0410 22:00:06.892198 3564 log.go:172] (0xc000a096b0) Go away received\nI0410 22:00:06.892640 3564 log.go:172] (0xc000a096b0) (0xc000a108c0) Stream removed, broadcasting: 1\nI0410 22:00:06.892661 3564 log.go:172] (0xc000a096b0) (0xc000686820) Stream removed, broadcasting: 3\nI0410 22:00:06.892673 3564 log.go:172] (0xc000a096b0) (0xc00075b5e0) Stream removed, broadcasting: 5\n" Apr 10 22:00:06.897: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9260.svc.cluster.local\tcanonical name = externalsvc.services-9260.svc.cluster.local.\nName:\texternalsvc.services-9260.svc.cluster.local\nAddress: 10.101.13.49\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9260, will wait for the garbage collector to delete the pods Apr 10 22:00:06.956: INFO: Deleting ReplicationController externalsvc took: 5.964912ms Apr 10 22:00:07.357: INFO: Terminating ReplicationController externalsvc pods took: 400.244515ms Apr 10 22:00:11.907: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:00:11.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9260" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.603 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":183,"skipped":2790,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:00:11.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-5b9a743c-2a5d-439a-a698-89204ce180b3 STEP: Creating secret with name s-test-opt-upd-5ccd2e4a-951a-47d0-b775-2f131d281ef4 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5b9a743c-2a5d-439a-a698-89204ce180b3 STEP: Updating secret 
s-test-opt-upd-5ccd2e4a-951a-47d0-b775-2f131d281ef4 STEP: Creating secret with name s-test-opt-create-bd6f733b-04c8-430f-83f4-c5df4c1fdd59 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:01:46.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2421" for this suite. • [SLOW TEST:94.730 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:01:46.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A 
or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 10 22:01:46.718: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-a 8ad8c98d-0720-4e84-916f-fda1a5a30bae 7050760 0 2020-04-10 22:01:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 22:01:46.718: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-a 8ad8c98d-0720-4e84-916f-fda1a5a30bae 7050760 0 2020-04-10 22:01:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 10 22:01:56.730: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-a 8ad8c98d-0720-4e84-916f-fda1a5a30bae 7050808 0 2020-04-10 22:01:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 10 22:01:56.730: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-a 8ad8c98d-0720-4e84-916f-fda1a5a30bae 7050808 0 2020-04-10 22:01:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 10 22:02:06.740: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-a 8ad8c98d-0720-4e84-916f-fda1a5a30bae 7050840 0 
2020-04-10 22:01:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 22:02:06.740: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-a 8ad8c98d-0720-4e84-916f-fda1a5a30bae 7050840 0 2020-04-10 22:01:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 10 22:02:16.747: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-a 8ad8c98d-0720-4e84-916f-fda1a5a30bae 7050872 0 2020-04-10 22:01:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 22:02:16.747: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-a 8ad8c98d-0720-4e84-916f-fda1a5a30bae 7050872 0 2020-04-10 22:01:46 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 10 22:02:26.754: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-b cb7a2bcc-f781-4abd-bae6-54f08c4a6013 7050902 0 2020-04-10 22:02:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 22:02:26.754: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2961 
/api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-b cb7a2bcc-f781-4abd-bae6-54f08c4a6013 7050902 0 2020-04-10 22:02:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 10 22:02:36.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-b cb7a2bcc-f781-4abd-bae6-54f08c4a6013 7050932 0 2020-04-10 22:02:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 22:02:36.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2961 /api/v1/namespaces/watch-2961/configmaps/e2e-watch-test-configmap-b cb7a2bcc-f781-4abd-bae6-54f08c4a6013 7050932 0 2020-04-10 22:02:26 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:02:46.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2961" for this suite. 
• [SLOW TEST:60.108 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":185,"skipped":2828,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:02:46.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-fe34b67b-19c0-4444-a309-51d98fa01d50 STEP: Creating a pod to test consume configMaps Apr 10 22:02:46.870: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c03df8c-4c5e-46d0-8336-6e35fab0f9b1" in namespace "projected-2601" to be "success or failure" Apr 10 22:02:46.887: INFO: Pod "pod-projected-configmaps-5c03df8c-4c5e-46d0-8336-6e35fab0f9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.603118ms Apr 10 22:02:48.891: INFO: Pod "pod-projected-configmaps-5c03df8c-4c5e-46d0-8336-6e35fab0f9b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020197542s Apr 10 22:02:50.895: INFO: Pod "pod-projected-configmaps-5c03df8c-4c5e-46d0-8336-6e35fab0f9b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024600679s STEP: Saw pod success Apr 10 22:02:50.895: INFO: Pod "pod-projected-configmaps-5c03df8c-4c5e-46d0-8336-6e35fab0f9b1" satisfied condition "success or failure" Apr 10 22:02:50.898: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5c03df8c-4c5e-46d0-8336-6e35fab0f9b1 container projected-configmap-volume-test: STEP: delete the pod Apr 10 22:02:50.956: INFO: Waiting for pod pod-projected-configmaps-5c03df8c-4c5e-46d0-8336-6e35fab0f9b1 to disappear Apr 10 22:02:50.970: INFO: Pod pod-projected-configmaps-5c03df8c-4c5e-46d0-8336-6e35fab0f9b1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:02:50.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2601" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2828,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:02:50.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 10 22:02:51.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3328' Apr 10 22:02:51.192: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 10 22:02:51.192: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Apr 10 22:02:53.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3328' Apr 10 22:02:53.389: INFO: stderr: "" Apr 10 22:02:53.389: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:02:53.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3328" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":187,"skipped":2831,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:02:53.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 10 22:02:53.511: INFO: Waiting up to 5m0s for pod "pod-98edc512-8bbf-481e-9fc6-89c07878b301" in namespace "emptydir-5836" to be "success or failure" Apr 10 22:02:53.515: INFO: Pod "pod-98edc512-8bbf-481e-9fc6-89c07878b301": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174475ms Apr 10 22:02:55.519: INFO: Pod "pod-98edc512-8bbf-481e-9fc6-89c07878b301": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008110565s Apr 10 22:02:57.523: INFO: Pod "pod-98edc512-8bbf-481e-9fc6-89c07878b301": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012065601s STEP: Saw pod success Apr 10 22:02:57.523: INFO: Pod "pod-98edc512-8bbf-481e-9fc6-89c07878b301" satisfied condition "success or failure" Apr 10 22:02:57.526: INFO: Trying to get logs from node jerma-worker pod pod-98edc512-8bbf-481e-9fc6-89c07878b301 container test-container: STEP: delete the pod Apr 10 22:02:57.543: INFO: Waiting for pod pod-98edc512-8bbf-481e-9fc6-89c07878b301 to disappear Apr 10 22:02:57.546: INFO: Pod pod-98edc512-8bbf-481e-9fc6-89c07878b301 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:02:57.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5836" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2833,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:02:57.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 
22:02:57.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fb383d4-7dda-466b-959e-0c7ff6112ef7" in namespace "downward-api-3015" to be "success or failure" Apr 10 22:02:57.671: INFO: Pod "downwardapi-volume-8fb383d4-7dda-466b-959e-0c7ff6112ef7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.607978ms Apr 10 22:02:59.676: INFO: Pod "downwardapi-volume-8fb383d4-7dda-466b-959e-0c7ff6112ef7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025986513s Apr 10 22:03:01.680: INFO: Pod "downwardapi-volume-8fb383d4-7dda-466b-959e-0c7ff6112ef7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029832773s STEP: Saw pod success Apr 10 22:03:01.680: INFO: Pod "downwardapi-volume-8fb383d4-7dda-466b-959e-0c7ff6112ef7" satisfied condition "success or failure" Apr 10 22:03:01.683: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8fb383d4-7dda-466b-959e-0c7ff6112ef7 container client-container: STEP: delete the pod Apr 10 22:03:01.703: INFO: Waiting for pod downwardapi-volume-8fb383d4-7dda-466b-959e-0c7ff6112ef7 to disappear Apr 10 22:03:01.707: INFO: Pod downwardapi-volume-8fb383d4-7dda-466b-959e-0c7ff6112ef7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:03:01.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3015" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2889,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:03:01.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-56f03ffc-3ed8-42dc-9721-907b9b527b9b STEP: Creating a pod to test consume secrets Apr 10 22:03:01.804: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17badabe-81a4-4a9c-9de5-f20c81cf6277" in namespace "projected-6290" to be "success or failure" Apr 10 22:03:01.815: INFO: Pod "pod-projected-secrets-17badabe-81a4-4a9c-9de5-f20c81cf6277": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116772ms Apr 10 22:03:03.818: INFO: Pod "pod-projected-secrets-17badabe-81a4-4a9c-9de5-f20c81cf6277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013749069s Apr 10 22:03:05.822: INFO: Pod "pod-projected-secrets-17badabe-81a4-4a9c-9de5-f20c81cf6277": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017742336s STEP: Saw pod success Apr 10 22:03:05.822: INFO: Pod "pod-projected-secrets-17badabe-81a4-4a9c-9de5-f20c81cf6277" satisfied condition "success or failure" Apr 10 22:03:05.825: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-17badabe-81a4-4a9c-9de5-f20c81cf6277 container projected-secret-volume-test: STEP: delete the pod Apr 10 22:03:05.851: INFO: Waiting for pod pod-projected-secrets-17badabe-81a4-4a9c-9de5-f20c81cf6277 to disappear Apr 10 22:03:05.857: INFO: Pod pod-projected-secrets-17badabe-81a4-4a9c-9de5-f20c81cf6277 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:03:05.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6290" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2904,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:03:05.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-e19f0b6b-63e6-4a62-829b-9d7ac820b2a1 STEP: Creating a pod to test consume secrets Apr 10 22:03:05.931: INFO: Waiting up to 5m0s for pod "pod-secrets-a4c09ec9-fc54-4caa-8c1c-838cc6423f9f" in namespace "secrets-5844" to be "success or failure" Apr 10 22:03:05.964: INFO: Pod "pod-secrets-a4c09ec9-fc54-4caa-8c1c-838cc6423f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.819181ms Apr 10 22:03:07.967: INFO: Pod "pod-secrets-a4c09ec9-fc54-4caa-8c1c-838cc6423f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036324377s Apr 10 22:03:09.971: INFO: Pod "pod-secrets-a4c09ec9-fc54-4caa-8c1c-838cc6423f9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040380513s STEP: Saw pod success Apr 10 22:03:09.971: INFO: Pod "pod-secrets-a4c09ec9-fc54-4caa-8c1c-838cc6423f9f" satisfied condition "success or failure" Apr 10 22:03:09.974: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a4c09ec9-fc54-4caa-8c1c-838cc6423f9f container secret-volume-test: STEP: delete the pod Apr 10 22:03:10.020: INFO: Waiting for pod pod-secrets-a4c09ec9-fc54-4caa-8c1c-838cc6423f9f to disappear Apr 10 22:03:10.031: INFO: Pod pod-secrets-a4c09ec9-fc54-4caa-8c1c-838cc6423f9f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:03:10.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5844" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":2979,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:03:10.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 10 22:03:10.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2662' Apr 10 22:03:10.407: INFO: stderr: "" Apr 10 22:03:10.407: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 10 22:03:10.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2662' Apr 10 22:03:10.515: INFO: stderr: "" Apr 10 22:03:10.515: INFO: stdout: "update-demo-nautilus-hk8x6 update-demo-nautilus-tgvlj " Apr 10 22:03:10.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:10.606: INFO: stderr: "" Apr 10 22:03:10.606: INFO: stdout: "" Apr 10 22:03:10.606: INFO: update-demo-nautilus-hk8x6 is created but not running Apr 10 22:03:15.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2662' Apr 10 22:03:15.702: INFO: stderr: "" Apr 10 22:03:15.702: INFO: stdout: "update-demo-nautilus-hk8x6 update-demo-nautilus-tgvlj " Apr 10 22:03:15.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:15.805: INFO: stderr: "" Apr 10 22:03:15.805: INFO: stdout: "true" Apr 10 22:03:15.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:15.902: INFO: stderr: "" Apr 10 22:03:15.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 22:03:15.902: INFO: validating pod update-demo-nautilus-hk8x6 Apr 10 22:03:15.906: INFO: got data: { "image": "nautilus.jpg" } Apr 10 22:03:15.906: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 22:03:15.906: INFO: update-demo-nautilus-hk8x6 is verified up and running Apr 10 22:03:15.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgvlj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:15.999: INFO: stderr: "" Apr 10 22:03:15.999: INFO: stdout: "true" Apr 10 22:03:15.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgvlj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:16.100: INFO: stderr: "" Apr 10 22:03:16.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 22:03:16.101: INFO: validating pod update-demo-nautilus-tgvlj Apr 10 22:03:16.105: INFO: got data: { "image": "nautilus.jpg" } Apr 10 22:03:16.105: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 10 22:03:16.105: INFO: update-demo-nautilus-tgvlj is verified up and running STEP: scaling down the replication controller Apr 10 22:03:16.108: INFO: scanned /root for discovery docs: Apr 10 22:03:16.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2662' Apr 10 22:03:17.231: INFO: stderr: "" Apr 10 22:03:17.231: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 10 22:03:17.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2662' Apr 10 22:03:17.329: INFO: stderr: "" Apr 10 22:03:17.329: INFO: stdout: "update-demo-nautilus-hk8x6 update-demo-nautilus-tgvlj " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 10 22:03:22.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2662' Apr 10 22:03:22.444: INFO: stderr: "" Apr 10 22:03:22.444: INFO: stdout: "update-demo-nautilus-hk8x6 update-demo-nautilus-tgvlj " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 10 22:03:27.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2662' Apr 10 22:03:27.548: INFO: stderr: "" Apr 10 22:03:27.548: INFO: stdout: "update-demo-nautilus-hk8x6 update-demo-nautilus-tgvlj " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 10 22:03:32.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2662' Apr 10 22:03:32.658: INFO: stderr: 
"" Apr 10 22:03:32.658: INFO: stdout: "update-demo-nautilus-hk8x6 " Apr 10 22:03:32.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:32.748: INFO: stderr: "" Apr 10 22:03:32.748: INFO: stdout: "true" Apr 10 22:03:32.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:32.863: INFO: stderr: "" Apr 10 22:03:32.863: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 22:03:32.863: INFO: validating pod update-demo-nautilus-hk8x6 Apr 10 22:03:32.866: INFO: got data: { "image": "nautilus.jpg" } Apr 10 22:03:32.866: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 22:03:32.866: INFO: update-demo-nautilus-hk8x6 is verified up and running STEP: scaling up the replication controller Apr 10 22:03:32.869: INFO: scanned /root for discovery docs: Apr 10 22:03:32.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2662' Apr 10 22:03:34.051: INFO: stderr: "" Apr 10 22:03:34.051: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 10 22:03:34.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2662' Apr 10 22:03:34.140: INFO: stderr: "" Apr 10 22:03:34.140: INFO: stdout: "update-demo-nautilus-hk8x6 update-demo-nautilus-n9m7w " Apr 10 22:03:34.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:34.229: INFO: stderr: "" Apr 10 22:03:34.229: INFO: stdout: "true" Apr 10 22:03:34.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:34.312: INFO: stderr: "" Apr 10 22:03:34.312: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 22:03:34.312: INFO: validating pod update-demo-nautilus-hk8x6 Apr 10 22:03:34.315: INFO: got data: { "image": "nautilus.jpg" } Apr 10 22:03:34.315: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 22:03:34.315: INFO: update-demo-nautilus-hk8x6 is verified up and running Apr 10 22:03:34.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9m7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:34.409: INFO: stderr: "" Apr 10 22:03:34.409: INFO: stdout: "" Apr 10 22:03:34.409: INFO: update-demo-nautilus-n9m7w is created but not running Apr 10 22:03:39.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2662' Apr 10 22:03:39.514: INFO: stderr: "" Apr 10 22:03:39.514: INFO: stdout: "update-demo-nautilus-hk8x6 update-demo-nautilus-n9m7w " Apr 10 22:03:39.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:39.599: INFO: stderr: "" Apr 10 22:03:39.599: INFO: stdout: "true" Apr 10 22:03:39.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk8x6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:39.706: INFO: stderr: "" Apr 10 22:03:39.706: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 22:03:39.706: INFO: validating pod update-demo-nautilus-hk8x6 Apr 10 22:03:39.711: INFO: got data: { "image": "nautilus.jpg" } Apr 10 22:03:39.711: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 22:03:39.712: INFO: update-demo-nautilus-hk8x6 is verified up and running Apr 10 22:03:39.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9m7w -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:39.804: INFO: stderr: "" Apr 10 22:03:39.804: INFO: stdout: "true" Apr 10 22:03:39.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9m7w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2662' Apr 10 22:03:39.903: INFO: stderr: "" Apr 10 22:03:39.903: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 10 22:03:39.903: INFO: validating pod update-demo-nautilus-n9m7w Apr 10 22:03:39.908: INFO: got data: { "image": "nautilus.jpg" } Apr 10 22:03:39.908: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 10 22:03:39.908: INFO: update-demo-nautilus-n9m7w is verified up and running STEP: using delete to clean up resources Apr 10 22:03:39.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2662' Apr 10 22:03:40.017: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 10 22:03:40.017: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 10 22:03:40.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2662' Apr 10 22:03:40.152: INFO: stderr: "No resources found in kubectl-2662 namespace.\n" Apr 10 22:03:40.152: INFO: stdout: "" Apr 10 22:03:40.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2662 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 22:03:40.260: INFO: stderr: "" Apr 10 22:03:40.260: INFO: stdout: "update-demo-nautilus-hk8x6\nupdate-demo-nautilus-n9m7w\n" Apr 10 22:03:40.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2662' Apr 10 22:03:40.868: INFO: stderr: "No resources found in kubectl-2662 namespace.\n" Apr 10 22:03:40.868: INFO: stdout: "" Apr 10 22:03:40.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2662 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 10 22:03:40.965: INFO: stderr: "" Apr 10 22:03:40.965: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:03:40.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2662" for this suite. 
• [SLOW TEST:31.027 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":192,"skipped":3011,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:03:41.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-01e37ed2-995d-4350-87cb-cc20f3f9c410 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:03:45.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-480" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:03:45.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:03:45.241: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.398396ms) Apr 10 22:03:45.244: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.712483ms) Apr 10 22:03:45.261: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 16.965597ms) Apr 10 22:03:45.294: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 33.0148ms) Apr 10 22:03:45.298: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.540302ms) Apr 10 22:03:45.301: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.446064ms) Apr 10 22:03:45.304: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.723409ms) Apr 10 22:03:45.307: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.340589ms) Apr 10 22:03:45.311: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.444874ms) Apr 10 22:03:45.314: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.335697ms) Apr 10 22:03:45.320: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 5.903125ms) Apr 10 22:03:45.332: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 11.421009ms) Apr 10 22:03:45.335: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.201848ms) Apr 10 22:03:45.338: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.163936ms) Apr 10 22:03:45.342: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.622339ms) Apr 10 22:03:45.351: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 9.843133ms) Apr 10 22:03:45.355: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.203612ms) Apr 10 22:03:45.357: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.457218ms) Apr 10 22:03:45.359: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.022013ms) Apr 10 22:03:45.361: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.101021ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:03:45.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5785" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":194,"skipped":3053,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:03:45.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-126af742-4613-46bf-a966-315fb56c353c [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:03:45.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4793" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":195,"skipped":3074,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:03:45.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 22:03:45.914: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 22:03:47.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153025, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153025, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153025, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153025, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 22:03:50.971: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 10 22:03:51.038: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:03:51.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3288" for this suite. STEP: Destroying namespace "webhook-3288-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.745 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":196,"skipped":3080,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:03:51.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2637c707-0f79-4967-b6a6-929a4c755671 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2637c707-0f79-4967-b6a6-929a4c755671 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:05:17.987: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "projected-444" for this suite. • [SLOW TEST:86.764 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3117,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:05:17.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Apr 10 22:05:18.062: INFO: Waiting up to 5m0s for pod "client-containers-cf95f84b-1696-4d3f-8292-7ca2b2d73cb3" in namespace "containers-7347" to be "success or failure" Apr 10 22:05:18.065: INFO: Pod "client-containers-cf95f84b-1696-4d3f-8292-7ca2b2d73cb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.84435ms Apr 10 22:05:20.069: INFO: Pod "client-containers-cf95f84b-1696-4d3f-8292-7ca2b2d73cb3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007126757s Apr 10 22:05:22.073: INFO: Pod "client-containers-cf95f84b-1696-4d3f-8292-7ca2b2d73cb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010880219s STEP: Saw pod success Apr 10 22:05:22.073: INFO: Pod "client-containers-cf95f84b-1696-4d3f-8292-7ca2b2d73cb3" satisfied condition "success or failure" Apr 10 22:05:22.076: INFO: Trying to get logs from node jerma-worker pod client-containers-cf95f84b-1696-4d3f-8292-7ca2b2d73cb3 container test-container: STEP: delete the pod Apr 10 22:05:22.107: INFO: Waiting for pod client-containers-cf95f84b-1696-4d3f-8292-7ca2b2d73cb3 to disappear Apr 10 22:05:22.112: INFO: Pod client-containers-cf95f84b-1696-4d3f-8292-7ca2b2d73cb3 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:05:22.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7347" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3117,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:05:22.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] 
should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 10 22:05:22.210: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:05:29.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9911" for this suite. • [SLOW TEST:7.276 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":199,"skipped":3129,"failed":0} SSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:05:29.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:05:29.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "lease-test-6154" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":200,"skipped":3134,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:05:29.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:05:29.640: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3e0b44aa-dbc5-4f3b-917f-2c1cac848e5b" in namespace "security-context-test-9276" to be "success or failure" Apr 10 22:05:29.680: INFO: Pod "alpine-nnp-false-3e0b44aa-dbc5-4f3b-917f-2c1cac848e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.312125ms Apr 10 22:05:31.685: INFO: Pod "alpine-nnp-false-3e0b44aa-dbc5-4f3b-917f-2c1cac848e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044670131s Apr 10 22:05:33.689: INFO: Pod "alpine-nnp-false-3e0b44aa-dbc5-4f3b-917f-2c1cac848e5b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049386743s Apr 10 22:05:33.690: INFO: Pod "alpine-nnp-false-3e0b44aa-dbc5-4f3b-917f-2c1cac848e5b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:05:33.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9276" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:05:33.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-8dd42cf1-56c4-49ea-8d2d-53af884fab61 STEP: Creating a pod to test consume configMaps Apr 10 22:05:33.779: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41" in namespace "projected-7308" to be "success or failure" Apr 10 22:05:33.789: INFO: Pod "pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41": Phase="Pending", 
Reason="", readiness=false. Elapsed: 9.518864ms Apr 10 22:05:35.805: INFO: Pod "pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025250306s Apr 10 22:05:37.808: INFO: Pod "pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41": Phase="Running", Reason="", readiness=true. Elapsed: 4.028656748s Apr 10 22:05:39.812: INFO: Pod "pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03291852s STEP: Saw pod success Apr 10 22:05:39.812: INFO: Pod "pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41" satisfied condition "success or failure" Apr 10 22:05:39.815: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41 container projected-configmap-volume-test: STEP: delete the pod Apr 10 22:05:39.843: INFO: Waiting for pod pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41 to disappear Apr 10 22:05:39.855: INFO: Pod pod-projected-configmaps-8f314679-dece-4e3e-be09-9fc281027a41 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:05:39.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7308" for this suite. 
• [SLOW TEST:6.159 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3170,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:05:39.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 10 22:05:44.961: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:05:45.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9827" 
for this suite. • [SLOW TEST:5.213 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":203,"skipped":3183,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:05:45.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 10 22:05:45.160: INFO: namespace kubectl-7097 Apr 10 22:05:45.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7097' Apr 10 22:05:49.247: INFO: stderr: "" Apr 10 22:05:49.248: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 10 22:05:50.266: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 22:05:50.266: INFO: Found 0 / 1 Apr 10 22:05:51.296: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 22:05:51.296: INFO: Found 0 / 1 Apr 10 22:05:52.261: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 22:05:52.261: INFO: Found 0 / 1 Apr 10 22:05:53.250: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 22:05:53.250: INFO: Found 0 / 1 Apr 10 22:05:54.252: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 22:05:54.252: INFO: Found 1 / 1 Apr 10 22:05:54.252: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 10 22:05:54.256: INFO: Selector matched 1 pods for map[app:agnhost] Apr 10 22:05:54.256: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 10 22:05:54.256: INFO: wait on agnhost-master startup in kubectl-7097 Apr 10 22:05:54.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-thpnj agnhost-master --namespace=kubectl-7097' Apr 10 22:05:54.366: INFO: stderr: "" Apr 10 22:05:54.366: INFO: stdout: "Paused\n" STEP: exposing RC Apr 10 22:05:54.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7097' Apr 10 22:05:54.541: INFO: stderr: "" Apr 10 22:05:54.541: INFO: stdout: "service/rm2 exposed\n" Apr 10 22:05:54.550: INFO: Service rm2 in namespace kubectl-7097 found. STEP: exposing service Apr 10 22:05:56.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7097' Apr 10 22:05:56.709: INFO: stderr: "" Apr 10 22:05:56.709: INFO: stdout: "service/rm3 exposed\n" Apr 10 22:05:56.712: INFO: Service rm3 in namespace kubectl-7097 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:05:58.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7097" for this suite. • [SLOW TEST:13.653 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":204,"skipped":3195,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:05:58.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-811 [It] should have a working scale subresource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-811 Apr 10 22:05:58.799: INFO: Found 0 stateful pods, waiting for 1 Apr 10 22:06:08.804: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 10 22:06:08.827: INFO: Deleting all statefulset in ns statefulset-811 Apr 10 22:06:08.847: INFO: Scaling statefulset ss to 0 Apr 10 22:06:28.937: INFO: Waiting for statefulset status.replicas updated to 0 Apr 10 22:06:28.941: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:06:28.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-811" for this suite. 
• [SLOW TEST:30.236 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":205,"skipped":3201,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:06:28.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:06:29.042: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:06:35.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-4964" for this suite. • [SLOW TEST:6.052 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":206,"skipped":3218,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:06:35.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c4840681-47af-4fc6-8b90-ecadefb1823b STEP: Creating a pod to test consume configMaps Apr 10 22:06:35.146: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f8eb270-bc49-4e7e-8ef6-f38683e0a713" in namespace "configmap-8398" to be "success or failure" Apr 10 
22:06:35.150: INFO: Pod "pod-configmaps-6f8eb270-bc49-4e7e-8ef6-f38683e0a713": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488196ms Apr 10 22:06:37.154: INFO: Pod "pod-configmaps-6f8eb270-bc49-4e7e-8ef6-f38683e0a713": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008524362s Apr 10 22:06:39.158: INFO: Pod "pod-configmaps-6f8eb270-bc49-4e7e-8ef6-f38683e0a713": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012715409s STEP: Saw pod success Apr 10 22:06:39.159: INFO: Pod "pod-configmaps-6f8eb270-bc49-4e7e-8ef6-f38683e0a713" satisfied condition "success or failure" Apr 10 22:06:39.162: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6f8eb270-bc49-4e7e-8ef6-f38683e0a713 container configmap-volume-test: STEP: delete the pod Apr 10 22:06:39.198: INFO: Waiting for pod pod-configmaps-6f8eb270-bc49-4e7e-8ef6-f38683e0a713 to disappear Apr 10 22:06:39.266: INFO: Pod pod-configmaps-6f8eb270-bc49-4e7e-8ef6-f38683e0a713 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:06:39.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8398" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:06:39.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 10 22:06:39.351: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-707 /api/v1/namespaces/watch-707/configmaps/e2e-watch-test-label-changed 7c37ddd4-9699-4ce7-a07b-dbca93906807 7052409 0 2020-04-10 22:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 10 22:06:39.351: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-707 /api/v1/namespaces/watch-707/configmaps/e2e-watch-test-label-changed 7c37ddd4-9699-4ce7-a07b-dbca93906807 7052410 0 2020-04-10 22:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] 
[] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 10 22:06:39.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-707 /api/v1/namespaces/watch-707/configmaps/e2e-watch-test-label-changed 7c37ddd4-9699-4ce7-a07b-dbca93906807 7052411 0 2020-04-10 22:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 10 22:06:49.396: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-707 /api/v1/namespaces/watch-707/configmaps/e2e-watch-test-label-changed 7c37ddd4-9699-4ce7-a07b-dbca93906807 7052457 0 2020-04-10 22:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 10 22:06:49.396: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-707 /api/v1/namespaces/watch-707/configmaps/e2e-watch-test-label-changed 7c37ddd4-9699-4ce7-a07b-dbca93906807 7052458 0 2020-04-10 22:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 10 22:06:49.396: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-707 /api/v1/namespaces/watch-707/configmaps/e2e-watch-test-label-changed 7c37ddd4-9699-4ce7-a07b-dbca93906807 7052459 0 2020-04-10 22:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:06:49.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-707" for this suite. • [SLOW TEST:10.137 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":208,"skipped":3256,"failed":0} SSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:06:49.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Apr 10 22:06:49.511: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5809" to be "success or failure" Apr 
10 22:06:49.515: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261773ms Apr 10 22:06:51.542: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031164899s Apr 10 22:06:53.545: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034821026s STEP: Saw pod success Apr 10 22:06:53.546: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 10 22:06:53.548: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 10 22:06:53.687: INFO: Waiting for pod pod-host-path-test to disappear Apr 10 22:06:53.702: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:06:53.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5809" for this suite. 
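The HostPath test above creates `pod-host-path-test` with a `hostPath` volume and checks that the mount carries the expected mode. A minimal sketch of such a pod spec follows; the host path, type, and image are assumptions for illustration, not values recovered from the log:

```python
# Illustrative hostPath pod spec. "DirectoryOrCreate" tells the kubelet to
# create the directory on the node if it does not already exist.
# Path, names, and image are hypothetical.
host_path_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-host-path-demo"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{
            "name": "test-volume",
            "hostPath": {
                "path": "/tmp/host-path-demo",   # directory on the node
                "type": "DirectoryOrCreate",
            },
        }],
        "containers": [{
            "name": "test-container-1",
            "image": "registry.k8s.io/e2e-test-images/agnhost:2.39",  # assumed image
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
        }],
    },
}
```

Note that because hostPath exposes the node's filesystem, the pod's view of `/test-volume` depends on which node schedules it; the e2e framework pulls the container's logs from that specific node (here `jerma-worker`) for exactly that reason.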
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:06:53.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7362 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 10 22:06:53.763: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 10 22:07:19.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:8080/dial?request=hostname&protocol=http&host=10.244.1.167&port=8080&tries=1'] Namespace:pod-network-test-7362 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 22:07:19.903: INFO: >>> kubeConfig: /root/.kube/config I0410 22:07:19.943648 7 log.go:172] (0xc00169d550) (0xc002379720) Create stream I0410 22:07:19.943675 7 log.go:172] (0xc00169d550) (0xc002379720) Stream added, broadcasting: 1 I0410 22:07:19.946686 7 log.go:172] (0xc00169d550) Reply frame received for 1 I0410 22:07:19.946740 7 
log.go:172] (0xc00169d550) (0xc00163d2c0) Create stream I0410 22:07:19.946757 7 log.go:172] (0xc00169d550) (0xc00163d2c0) Stream added, broadcasting: 3 I0410 22:07:19.947653 7 log.go:172] (0xc00169d550) Reply frame received for 3 I0410 22:07:19.947716 7 log.go:172] (0xc00169d550) (0xc0028477c0) Create stream I0410 22:07:19.947741 7 log.go:172] (0xc00169d550) (0xc0028477c0) Stream added, broadcasting: 5 I0410 22:07:19.948543 7 log.go:172] (0xc00169d550) Reply frame received for 5 I0410 22:07:20.031513 7 log.go:172] (0xc00169d550) Data frame received for 3 I0410 22:07:20.031542 7 log.go:172] (0xc00163d2c0) (3) Data frame handling I0410 22:07:20.031560 7 log.go:172] (0xc00163d2c0) (3) Data frame sent I0410 22:07:20.032391 7 log.go:172] (0xc00169d550) Data frame received for 3 I0410 22:07:20.032432 7 log.go:172] (0xc00163d2c0) (3) Data frame handling I0410 22:07:20.032458 7 log.go:172] (0xc00169d550) Data frame received for 5 I0410 22:07:20.032486 7 log.go:172] (0xc0028477c0) (5) Data frame handling I0410 22:07:20.034409 7 log.go:172] (0xc00169d550) Data frame received for 1 I0410 22:07:20.034426 7 log.go:172] (0xc002379720) (1) Data frame handling I0410 22:07:20.034435 7 log.go:172] (0xc002379720) (1) Data frame sent I0410 22:07:20.034446 7 log.go:172] (0xc00169d550) (0xc002379720) Stream removed, broadcasting: 1 I0410 22:07:20.034462 7 log.go:172] (0xc00169d550) Go away received I0410 22:07:20.034653 7 log.go:172] (0xc00169d550) (0xc002379720) Stream removed, broadcasting: 1 I0410 22:07:20.034687 7 log.go:172] (0xc00169d550) (0xc00163d2c0) Stream removed, broadcasting: 3 I0410 22:07:20.034711 7 log.go:172] (0xc00169d550) (0xc0028477c0) Stream removed, broadcasting: 5 Apr 10 22:07:20.034: INFO: Waiting for responses: map[] Apr 10 22:07:20.037: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:8080/dial?request=hostname&protocol=http&host=10.244.2.18&port=8080&tries=1'] Namespace:pod-network-test-7362 PodName:host-test-container-pod 
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 22:07:20.037: INFO: >>> kubeConfig: /root/.kube/config I0410 22:07:20.073086 7 log.go:172] (0xc000bf29a0) (0xc002847d60) Create stream I0410 22:07:20.073236 7 log.go:172] (0xc000bf29a0) (0xc002847d60) Stream added, broadcasting: 1 I0410 22:07:20.076077 7 log.go:172] (0xc000bf29a0) Reply frame received for 1 I0410 22:07:20.076123 7 log.go:172] (0xc000bf29a0) (0xc00163d4a0) Create stream I0410 22:07:20.076138 7 log.go:172] (0xc000bf29a0) (0xc00163d4a0) Stream added, broadcasting: 3 I0410 22:07:20.077012 7 log.go:172] (0xc000bf29a0) Reply frame received for 3 I0410 22:07:20.077054 7 log.go:172] (0xc000bf29a0) (0xc00163d860) Create stream I0410 22:07:20.077066 7 log.go:172] (0xc000bf29a0) (0xc00163d860) Stream added, broadcasting: 5 I0410 22:07:20.078227 7 log.go:172] (0xc000bf29a0) Reply frame received for 5 I0410 22:07:20.128403 7 log.go:172] (0xc000bf29a0) Data frame received for 3 I0410 22:07:20.128427 7 log.go:172] (0xc00163d4a0) (3) Data frame handling I0410 22:07:20.128438 7 log.go:172] (0xc00163d4a0) (3) Data frame sent I0410 22:07:20.128740 7 log.go:172] (0xc000bf29a0) Data frame received for 5 I0410 22:07:20.128753 7 log.go:172] (0xc00163d860) (5) Data frame handling I0410 22:07:20.128886 7 log.go:172] (0xc000bf29a0) Data frame received for 3 I0410 22:07:20.128901 7 log.go:172] (0xc00163d4a0) (3) Data frame handling I0410 22:07:20.130824 7 log.go:172] (0xc000bf29a0) Data frame received for 1 I0410 22:07:20.130864 7 log.go:172] (0xc002847d60) (1) Data frame handling I0410 22:07:20.130889 7 log.go:172] (0xc002847d60) (1) Data frame sent I0410 22:07:20.130992 7 log.go:172] (0xc000bf29a0) (0xc002847d60) Stream removed, broadcasting: 1 I0410 22:07:20.131059 7 log.go:172] (0xc000bf29a0) Go away received I0410 22:07:20.131400 7 log.go:172] (0xc000bf29a0) (0xc002847d60) Stream removed, broadcasting: 1 I0410 22:07:20.131429 7 log.go:172] (0xc000bf29a0) (0xc00163d4a0) 
Stream removed, broadcasting: 3 I0410 22:07:20.131451 7 log.go:172] (0xc000bf29a0) (0xc00163d860) Stream removed, broadcasting: 5 Apr 10 22:07:20.131: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:07:20.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7362" for this suite. • [SLOW TEST:26.430 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3306,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:07:20.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:07:37.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2427" for this suite. • [SLOW TEST:17.120 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":211,"skipped":3311,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:07:37.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 22:07:37.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9851f781-1091-4714-8bf6-fab1ad9eed56" in namespace "downward-api-6832" to be "success or failure" Apr 10 22:07:37.416: INFO: Pod "downwardapi-volume-9851f781-1091-4714-8bf6-fab1ad9eed56": Phase="Pending", Reason="", readiness=false. Elapsed: 63.465712ms Apr 10 22:07:39.420: INFO: Pod "downwardapi-volume-9851f781-1091-4714-8bf6-fab1ad9eed56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067317274s Apr 10 22:07:41.425: INFO: Pod "downwardapi-volume-9851f781-1091-4714-8bf6-fab1ad9eed56": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.072290825s STEP: Saw pod success Apr 10 22:07:41.425: INFO: Pod "downwardapi-volume-9851f781-1091-4714-8bf6-fab1ad9eed56" satisfied condition "success or failure" Apr 10 22:07:41.429: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9851f781-1091-4714-8bf6-fab1ad9eed56 container client-container: STEP: delete the pod Apr 10 22:07:41.462: INFO: Waiting for pod downwardapi-volume-9851f781-1091-4714-8bf6-fab1ad9eed56 to disappear Apr 10 22:07:41.475: INFO: Pod downwardapi-volume-9851f781-1091-4714-8bf6-fab1ad9eed56 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:07:41.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6832" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3318,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:07:41.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role 
binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 10 22:07:41.938: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 10 22:07:43.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153261, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153261, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153261, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153261, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 22:07:46.992: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:07:46.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:07:48.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9174" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.843 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":213,"skipped":3334,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:07:48.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:07:48.406: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 10 22:07:48.579: INFO: stderr: "" Apr 10 22:07:48.579: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:48:13Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:07:48.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7731" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":214,"skipped":3349,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:07:48.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 10 22:07:49.507: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 10 22:07:51.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153269, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153269, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153269, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153269, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 22:07:54.608: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:07:54.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:07:55.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7230" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.297 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":215,"skipped":3352,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:07:55.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Apr 10 22:08:00.004: INFO: Pod 
pod-hostip-dfbb4615-28cf-4a4b-a322-3b38b7321e1a has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:08:00.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-452" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3368,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:08:00.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 22:08:00.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92b11c3b-9323-4ec5-b9d2-dc3444677a24" in namespace "downward-api-5033" to be "success or failure" Apr 10 22:08:00.116: INFO: Pod "downwardapi-volume-92b11c3b-9323-4ec5-b9d2-dc3444677a24": Phase="Pending", Reason="", readiness=false. Elapsed: 40.629575ms Apr 10 22:08:02.121: INFO: Pod "downwardapi-volume-92b11c3b-9323-4ec5-b9d2-dc3444677a24": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045371202s Apr 10 22:08:04.125: INFO: Pod "downwardapi-volume-92b11c3b-9323-4ec5-b9d2-dc3444677a24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049543881s STEP: Saw pod success Apr 10 22:08:04.125: INFO: Pod "downwardapi-volume-92b11c3b-9323-4ec5-b9d2-dc3444677a24" satisfied condition "success or failure" Apr 10 22:08:04.128: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-92b11c3b-9323-4ec5-b9d2-dc3444677a24 container client-container: STEP: delete the pod Apr 10 22:08:04.150: INFO: Waiting for pod downwardapi-volume-92b11c3b-9323-4ec5-b9d2-dc3444677a24 to disappear Apr 10 22:08:04.160: INFO: Pod downwardapi-volume-92b11c3b-9323-4ec5-b9d2-dc3444677a24 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:08:04.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5033" for this suite. 
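The two Downward API volume tests above ("should provide container's cpu request" and "should set DefaultMode on files") project pod and container fields into files inside the pod. A sketch of the kind of volume definition involved, under stated assumptions (file names and the container name are illustrative; the `fieldRef`/`resourceFieldRef` selectors are the standard downward-API field paths):

```python
# Illustrative downwardAPI volume: one file from a pod metadata field, one from
# a container resource request, with an explicit defaultMode on the files.
# "client-container" and the item paths are hypothetical names.
downward_api_volume = {
    "name": "podinfo",
    "downwardAPI": {
        "defaultMode": 0o400,  # mode applied to each projected file
        "items": [
            {
                "path": "podname",
                "fieldRef": {"fieldPath": "metadata.name"},
            },
            {
                "path": "cpu_request",
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "requests.cpu",
                },
            },
        ],
    },
}
```

The e2e pod mounts such a volume and asserts on the file contents (for the cpu-request case) or on the file mode (for the DefaultMode case), which is why both runs end with reading the container's logs before deleting the pod.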
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:08:04.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Apr 10 22:08:04.251: INFO: Waiting up to 5m0s for pod "client-containers-3bc0583d-c687-440b-b14b-54bc94082291" in namespace "containers-7548" to be "success or failure" Apr 10 22:08:04.262: INFO: Pod "client-containers-3bc0583d-c687-440b-b14b-54bc94082291": Phase="Pending", Reason="", readiness=false. Elapsed: 10.559559ms Apr 10 22:08:06.266: INFO: Pod "client-containers-3bc0583d-c687-440b-b14b-54bc94082291": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014806588s Apr 10 22:08:08.271: INFO: Pod "client-containers-3bc0583d-c687-440b-b14b-54bc94082291": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019074852s STEP: Saw pod success Apr 10 22:08:08.271: INFO: Pod "client-containers-3bc0583d-c687-440b-b14b-54bc94082291" satisfied condition "success or failure" Apr 10 22:08:08.274: INFO: Trying to get logs from node jerma-worker2 pod client-containers-3bc0583d-c687-440b-b14b-54bc94082291 container test-container: STEP: delete the pod Apr 10 22:08:08.293: INFO: Waiting for pod client-containers-3bc0583d-c687-440b-b14b-54bc94082291 to disappear Apr 10 22:08:08.345: INFO: Pod client-containers-3bc0583d-c687-440b-b14b-54bc94082291 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:08:08.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7548" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3469,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:08:08.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
pod-subpath-test-projected-8fhk STEP: Creating a pod to test atomic-volume-subpath Apr 10 22:08:08.479: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8fhk" in namespace "subpath-6381" to be "success or failure" Apr 10 22:08:08.483: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054289ms Apr 10 22:08:10.487: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008040018s Apr 10 22:08:12.491: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 4.012150946s Apr 10 22:08:14.496: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 6.01640293s Apr 10 22:08:16.499: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 8.020197502s Apr 10 22:08:18.503: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 10.024266138s Apr 10 22:08:20.507: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 12.028232265s Apr 10 22:08:22.512: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 14.03256819s Apr 10 22:08:24.516: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 16.036893437s Apr 10 22:08:26.520: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 18.040657875s Apr 10 22:08:28.543: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 20.063911334s Apr 10 22:08:30.547: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Running", Reason="", readiness=true. Elapsed: 22.067995797s Apr 10 22:08:32.551: INFO: Pod "pod-subpath-test-projected-8fhk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.071728478s STEP: Saw pod success Apr 10 22:08:32.551: INFO: Pod "pod-subpath-test-projected-8fhk" satisfied condition "success or failure" Apr 10 22:08:32.553: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-8fhk container test-container-subpath-projected-8fhk: STEP: delete the pod Apr 10 22:08:32.573: INFO: Waiting for pod pod-subpath-test-projected-8fhk to disappear Apr 10 22:08:32.578: INFO: Pod pod-subpath-test-projected-8fhk no longer exists STEP: Deleting pod pod-subpath-test-projected-8fhk Apr 10 22:08:32.578: INFO: Deleting pod "pod-subpath-test-projected-8fhk" in namespace "subpath-6381" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:08:32.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6381" for this suite. • [SLOW TEST:24.255 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":219,"skipped":3471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 
22:08:32.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9311 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9311 I0410 22:08:32.776837 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9311, replica count: 2 I0410 22:08:35.827345 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0410 22:08:38.827568 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 10 22:08:38.827: INFO: Creating new exec pod Apr 10 22:08:43.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9311 execpod5dnfq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 10 22:08:44.106: INFO: stderr: "I0410 22:08:43.994338 4358 log.go:172] (0xc000970790) (0xc0005e1900) Create stream\nI0410 22:08:43.994417 4358 log.go:172] (0xc000970790) (0xc0005e1900) Stream added, broadcasting: 1\nI0410 22:08:43.997852 4358 log.go:172] (0xc000970790) Reply frame received for 1\nI0410 22:08:43.997894 4358 log.go:172] (0xc000970790) (0xc0005e1ae0) Create stream\nI0410 22:08:43.997905 4358 log.go:172] (0xc000970790) (0xc0005e1ae0) Stream added, broadcasting: 3\nI0410 22:08:43.999019 4358 
log.go:172] (0xc000970790) Reply frame received for 3\nI0410 22:08:43.999071 4358 log.go:172] (0xc000970790) (0xc000928000) Create stream\nI0410 22:08:43.999098 4358 log.go:172] (0xc000970790) (0xc000928000) Stream added, broadcasting: 5\nI0410 22:08:44.000093 4358 log.go:172] (0xc000970790) Reply frame received for 5\nI0410 22:08:44.098113 4358 log.go:172] (0xc000970790) Data frame received for 5\nI0410 22:08:44.098143 4358 log.go:172] (0xc000928000) (5) Data frame handling\nI0410 22:08:44.098162 4358 log.go:172] (0xc000928000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0410 22:08:44.098618 4358 log.go:172] (0xc000970790) Data frame received for 5\nI0410 22:08:44.098671 4358 log.go:172] (0xc000928000) (5) Data frame handling\nI0410 22:08:44.098696 4358 log.go:172] (0xc000928000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0410 22:08:44.099058 4358 log.go:172] (0xc000970790) Data frame received for 3\nI0410 22:08:44.099089 4358 log.go:172] (0xc0005e1ae0) (3) Data frame handling\nI0410 22:08:44.099109 4358 log.go:172] (0xc000970790) Data frame received for 5\nI0410 22:08:44.099119 4358 log.go:172] (0xc000928000) (5) Data frame handling\nI0410 22:08:44.101364 4358 log.go:172] (0xc000970790) Data frame received for 1\nI0410 22:08:44.101399 4358 log.go:172] (0xc0005e1900) (1) Data frame handling\nI0410 22:08:44.101425 4358 log.go:172] (0xc0005e1900) (1) Data frame sent\nI0410 22:08:44.101454 4358 log.go:172] (0xc000970790) (0xc0005e1900) Stream removed, broadcasting: 1\nI0410 22:08:44.101504 4358 log.go:172] (0xc000970790) Go away received\nI0410 22:08:44.101906 4358 log.go:172] (0xc000970790) (0xc0005e1900) Stream removed, broadcasting: 1\nI0410 22:08:44.101930 4358 log.go:172] (0xc000970790) (0xc0005e1ae0) Stream removed, broadcasting: 3\nI0410 22:08:44.101955 4358 log.go:172] (0xc000970790) (0xc000928000) Stream removed, broadcasting: 5\n" Apr 10 22:08:44.106: INFO: stdout: "" Apr 10 22:08:44.107: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9311 execpod5dnfq -- /bin/sh -x -c nc -zv -t -w 2 10.107.11.5 80' Apr 10 22:08:44.313: INFO: stderr: "I0410 22:08:44.239447 4378 log.go:172] (0xc00059a000) (0xc000a50000) Create stream\nI0410 22:08:44.239520 4378 log.go:172] (0xc00059a000) (0xc000a50000) Stream added, broadcasting: 1\nI0410 22:08:44.242609 4378 log.go:172] (0xc00059a000) Reply frame received for 1\nI0410 22:08:44.242659 4378 log.go:172] (0xc00059a000) (0xc000663900) Create stream\nI0410 22:08:44.242672 4378 log.go:172] (0xc00059a000) (0xc000663900) Stream added, broadcasting: 3\nI0410 22:08:44.243594 4378 log.go:172] (0xc00059a000) Reply frame received for 3\nI0410 22:08:44.243640 4378 log.go:172] (0xc00059a000) (0xc0006639a0) Create stream\nI0410 22:08:44.243655 4378 log.go:172] (0xc00059a000) (0xc0006639a0) Stream added, broadcasting: 5\nI0410 22:08:44.244470 4378 log.go:172] (0xc00059a000) Reply frame received for 5\nI0410 22:08:44.307907 4378 log.go:172] (0xc00059a000) Data frame received for 3\nI0410 22:08:44.307939 4378 log.go:172] (0xc000663900) (3) Data frame handling\nI0410 22:08:44.307959 4378 log.go:172] (0xc00059a000) Data frame received for 5\nI0410 22:08:44.307966 4378 log.go:172] (0xc0006639a0) (5) Data frame handling\nI0410 22:08:44.307973 4378 log.go:172] (0xc0006639a0) (5) Data frame sent\nI0410 22:08:44.307979 4378 log.go:172] (0xc00059a000) Data frame received for 5\nI0410 22:08:44.307989 4378 log.go:172] (0xc0006639a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.11.5 80\nConnection to 10.107.11.5 80 port [tcp/http] succeeded!\nI0410 22:08:44.309634 4378 log.go:172] (0xc00059a000) Data frame received for 1\nI0410 22:08:44.309660 4378 log.go:172] (0xc000a50000) (1) Data frame handling\nI0410 22:08:44.309675 4378 log.go:172] (0xc000a50000) (1) Data frame sent\nI0410 22:08:44.309694 4378 log.go:172] (0xc00059a000) (0xc000a50000) Stream removed, broadcasting: 1\nI0410 22:08:44.309780 
4378 log.go:172] (0xc00059a000) Go away received\nI0410 22:08:44.310067 4378 log.go:172] (0xc00059a000) (0xc000a50000) Stream removed, broadcasting: 1\nI0410 22:08:44.310089 4378 log.go:172] (0xc00059a000) (0xc000663900) Stream removed, broadcasting: 3\nI0410 22:08:44.310100 4378 log.go:172] (0xc00059a000) (0xc0006639a0) Stream removed, broadcasting: 5\n" Apr 10 22:08:44.313: INFO: stdout: "" Apr 10 22:08:44.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9311 execpod5dnfq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32761' Apr 10 22:08:44.526: INFO: stderr: "I0410 22:08:44.446164 4399 log.go:172] (0xc0007b6a50) (0xc0007b2000) Create stream\nI0410 22:08:44.446248 4399 log.go:172] (0xc0007b6a50) (0xc0007b2000) Stream added, broadcasting: 1\nI0410 22:08:44.449405 4399 log.go:172] (0xc0007b6a50) Reply frame received for 1\nI0410 22:08:44.449442 4399 log.go:172] (0xc0007b6a50) (0xc000665b80) Create stream\nI0410 22:08:44.449452 4399 log.go:172] (0xc0007b6a50) (0xc000665b80) Stream added, broadcasting: 3\nI0410 22:08:44.450259 4399 log.go:172] (0xc0007b6a50) Reply frame received for 3\nI0410 22:08:44.450288 4399 log.go:172] (0xc0007b6a50) (0xc000665d60) Create stream\nI0410 22:08:44.450297 4399 log.go:172] (0xc0007b6a50) (0xc000665d60) Stream added, broadcasting: 5\nI0410 22:08:44.451038 4399 log.go:172] (0xc0007b6a50) Reply frame received for 5\nI0410 22:08:44.519106 4399 log.go:172] (0xc0007b6a50) Data frame received for 3\nI0410 22:08:44.519184 4399 log.go:172] (0xc000665b80) (3) Data frame handling\nI0410 22:08:44.519219 4399 log.go:172] (0xc0007b6a50) Data frame received for 5\nI0410 22:08:44.519270 4399 log.go:172] (0xc000665d60) (5) Data frame handling\nI0410 22:08:44.519302 4399 log.go:172] (0xc000665d60) (5) Data frame sent\nI0410 22:08:44.519316 4399 log.go:172] (0xc0007b6a50) Data frame received for 5\nI0410 22:08:44.519330 4399 log.go:172] (0xc000665d60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 
32761\nConnection to 172.17.0.10 32761 port [tcp/32761] succeeded!\nI0410 22:08:44.521085 4399 log.go:172] (0xc0007b6a50) Data frame received for 1\nI0410 22:08:44.521274 4399 log.go:172] (0xc0007b2000) (1) Data frame handling\nI0410 22:08:44.521318 4399 log.go:172] (0xc0007b2000) (1) Data frame sent\nI0410 22:08:44.521463 4399 log.go:172] (0xc0007b6a50) (0xc0007b2000) Stream removed, broadcasting: 1\nI0410 22:08:44.521497 4399 log.go:172] (0xc0007b6a50) Go away received\nI0410 22:08:44.521901 4399 log.go:172] (0xc0007b6a50) (0xc0007b2000) Stream removed, broadcasting: 1\nI0410 22:08:44.521921 4399 log.go:172] (0xc0007b6a50) (0xc000665b80) Stream removed, broadcasting: 3\nI0410 22:08:44.521932 4399 log.go:172] (0xc0007b6a50) (0xc000665d60) Stream removed, broadcasting: 5\n" Apr 10 22:08:44.526: INFO: stdout: "" Apr 10 22:08:44.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9311 execpod5dnfq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32761' Apr 10 22:08:44.748: INFO: stderr: "I0410 22:08:44.657973 4420 log.go:172] (0xc000a2abb0) (0xc00092e460) Create stream\nI0410 22:08:44.658036 4420 log.go:172] (0xc000a2abb0) (0xc00092e460) Stream added, broadcasting: 1\nI0410 22:08:44.666633 4420 log.go:172] (0xc000a2abb0) Reply frame received for 1\nI0410 22:08:44.666687 4420 log.go:172] (0xc000a2abb0) (0xc00092e500) Create stream\nI0410 22:08:44.666707 4420 log.go:172] (0xc000a2abb0) (0xc00092e500) Stream added, broadcasting: 3\nI0410 22:08:44.671672 4420 log.go:172] (0xc000a2abb0) Reply frame received for 3\nI0410 22:08:44.671708 4420 log.go:172] (0xc000a2abb0) (0xc0009ee000) Create stream\nI0410 22:08:44.671724 4420 log.go:172] (0xc000a2abb0) (0xc0009ee000) Stream added, broadcasting: 5\nI0410 22:08:44.672382 4420 log.go:172] (0xc000a2abb0) Reply frame received for 5\nI0410 22:08:44.740180 4420 log.go:172] (0xc000a2abb0) Data frame received for 5\nI0410 22:08:44.740227 4420 log.go:172] (0xc0009ee000) (5) Data frame 
handling\nI0410 22:08:44.740245 4420 log.go:172] (0xc0009ee000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32761\nConnection to 172.17.0.8 32761 port [tcp/32761] succeeded!\nI0410 22:08:44.740271 4420 log.go:172] (0xc000a2abb0) Data frame received for 3\nI0410 22:08:44.740319 4420 log.go:172] (0xc00092e500) (3) Data frame handling\nI0410 22:08:44.740357 4420 log.go:172] (0xc000a2abb0) Data frame received for 5\nI0410 22:08:44.740379 4420 log.go:172] (0xc0009ee000) (5) Data frame handling\nI0410 22:08:44.742533 4420 log.go:172] (0xc000a2abb0) Data frame received for 1\nI0410 22:08:44.742563 4420 log.go:172] (0xc00092e460) (1) Data frame handling\nI0410 22:08:44.742599 4420 log.go:172] (0xc00092e460) (1) Data frame sent\nI0410 22:08:44.742624 4420 log.go:172] (0xc000a2abb0) (0xc00092e460) Stream removed, broadcasting: 1\nI0410 22:08:44.742662 4420 log.go:172] (0xc000a2abb0) Go away received\nI0410 22:08:44.743051 4420 log.go:172] (0xc000a2abb0) (0xc00092e460) Stream removed, broadcasting: 1\nI0410 22:08:44.743069 4420 log.go:172] (0xc000a2abb0) (0xc00092e500) Stream removed, broadcasting: 3\nI0410 22:08:44.743078 4420 log.go:172] (0xc000a2abb0) (0xc0009ee000) Stream removed, broadcasting: 5\n" Apr 10 22:08:44.748: INFO: stdout: "" Apr 10 22:08:44.748: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:08:44.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9311" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.168 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":220,"skipped":3496,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:08:44.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 10 22:08:49.425: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1368 pod-service-account-45f065f9-f47d-48a0-8950-e3ec50e84ce4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 10 22:08:49.669: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1368 pod-service-account-45f065f9-f47d-48a0-8950-e3ec50e84ce4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 10 
22:08:49.872: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1368 pod-service-account-45f065f9-f47d-48a0-8950-e3ec50e84ce4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:08:50.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1368" for this suite. • [SLOW TEST:5.416 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":221,"skipped":3503,"failed":0} [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:08:50.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-7f0330a7-3399-4c3d-b39d-01057cf4a239 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-7f0330a7-3399-4c3d-b39d-01057cf4a239 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:08:56.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6686" for this suite. • [SLOW TEST:6.374 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3503,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:08:56.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Apr 10 22:08:56.602: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 10 22:08:56.602: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5123' Apr 10 22:08:56.934: INFO: stderr: "" Apr 10 22:08:56.934: INFO: stdout: "service/agnhost-slave created\n" Apr 10 22:08:56.934: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 10 22:08:56.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5123' Apr 10 22:08:57.220: INFO: stderr: "" Apr 10 22:08:57.220: INFO: stdout: "service/agnhost-master created\n" Apr 10 22:08:57.220: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 10 22:08:57.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5123' Apr 10 22:08:57.487: INFO: stderr: "" Apr 10 22:08:57.487: INFO: stdout: "service/frontend created\n" Apr 10 22:08:57.488: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 10 22:08:57.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5123' Apr 10 22:08:57.950: INFO: stderr: "" Apr 10 22:08:57.950: INFO: stdout: "deployment.apps/frontend created\n" Apr 10 22:08:57.951: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 
1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 10 22:08:57.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5123' Apr 10 22:08:58.364: INFO: stderr: "" Apr 10 22:08:58.364: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 10 22:08:58.364: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 10 22:08:58.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5123' Apr 10 22:08:58.681: INFO: stderr: "" Apr 10 22:08:58.681: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 10 22:08:58.681: INFO: Waiting for all frontend pods to be Running. Apr 10 22:09:08.732: INFO: Waiting for frontend to serve content. Apr 10 22:09:08.742: INFO: Trying to add a new entry to the guestbook. Apr 10 22:09:08.751: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 10 22:09:08.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5123' Apr 10 22:09:08.901: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 10 22:09:08.901: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 10 22:09:08.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5123' Apr 10 22:09:09.054: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 22:09:09.054: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 10 22:09:09.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5123' Apr 10 22:09:09.172: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 22:09:09.172: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 10 22:09:09.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5123' Apr 10 22:09:09.273: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 22:09:09.273: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 10 22:09:09.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5123' Apr 10 22:09:09.413: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 10 22:09:09.413: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 10 22:09:09.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5123' Apr 10 22:09:09.527: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 10 22:09:09.527: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:09:09.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5123" for this suite. • [SLOW TEST:12.992 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":223,"skipped":3523,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:09:09.559: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0410 22:09:50.108943 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 10 22:09:50.109: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:09:50.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1167" for this suite. 
• [SLOW TEST:40.557 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":224,"skipped":3530,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:09:50.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 22:09:50.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0d4c1fc-281f-4e57-b417-393b98f82d6f" in namespace "projected-4810" to be "success or failure" Apr 10 22:09:50.198: INFO: Pod "downwardapi-volume-d0d4c1fc-281f-4e57-b417-393b98f82d6f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.911894ms Apr 10 22:09:52.366: INFO: Pod "downwardapi-volume-d0d4c1fc-281f-4e57-b417-393b98f82d6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172121956s Apr 10 22:09:54.371: INFO: Pod "downwardapi-volume-d0d4c1fc-281f-4e57-b417-393b98f82d6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.176824775s STEP: Saw pod success Apr 10 22:09:54.371: INFO: Pod "downwardapi-volume-d0d4c1fc-281f-4e57-b417-393b98f82d6f" satisfied condition "success or failure" Apr 10 22:09:54.374: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d0d4c1fc-281f-4e57-b417-393b98f82d6f container client-container: STEP: delete the pod Apr 10 22:09:54.412: INFO: Waiting for pod downwardapi-volume-d0d4c1fc-281f-4e57-b417-393b98f82d6f to disappear Apr 10 22:09:54.426: INFO: Pod downwardapi-volume-d0d4c1fc-281f-4e57-b417-393b98f82d6f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:09:54.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4810" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:09:54.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-6977/secret-test-2d3ec6de-f081-4457-90f9-7e1b1417382a STEP: Creating a pod to test consume secrets Apr 10 22:09:54.508: INFO: Waiting up to 5m0s for pod "pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850" in namespace "secrets-6977" to be "success or failure" Apr 10 22:09:54.510: INFO: Pod "pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02586ms Apr 10 22:09:56.701: INFO: Pod "pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19325175s Apr 10 22:09:58.846: INFO: Pod "pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850": Phase="Running", Reason="", readiness=true. Elapsed: 4.337994064s Apr 10 22:10:00.849: INFO: Pod "pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.341743544s STEP: Saw pod success Apr 10 22:10:00.849: INFO: Pod "pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850" satisfied condition "success or failure" Apr 10 22:10:00.852: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850 container env-test: STEP: delete the pod Apr 10 22:10:00.866: INFO: Waiting for pod pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850 to disappear Apr 10 22:10:00.871: INFO: Pod pod-configmaps-78ebcae0-3551-499f-a6a0-9f278aeaa850 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:10:00.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6977" for this suite. • [SLOW TEST:6.444 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3594,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:10:00.878: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:10:00.975: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:10:01.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9721" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":227,"skipped":3620,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:10:01.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
pod-subpath-test-configmap-tjcm STEP: Creating a pod to test atomic-volume-subpath Apr 10 22:10:01.746: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tjcm" in namespace "subpath-4888" to be "success or failure" Apr 10 22:10:01.755: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Pending", Reason="", readiness=false. Elapsed: 9.25981ms Apr 10 22:10:03.758: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012830385s Apr 10 22:10:05.772: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 4.026073781s Apr 10 22:10:07.790: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 6.044395863s Apr 10 22:10:09.802: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 8.056429179s Apr 10 22:10:11.807: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 10.061000028s Apr 10 22:10:13.814: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 12.068513546s Apr 10 22:10:15.817: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 14.071929785s Apr 10 22:10:17.821: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 16.075585714s Apr 10 22:10:19.843: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 18.097539704s Apr 10 22:10:21.862: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 20.116535737s Apr 10 22:10:23.868: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Running", Reason="", readiness=true. Elapsed: 22.122551681s Apr 10 22:10:25.872: INFO: Pod "pod-subpath-test-configmap-tjcm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.126745806s STEP: Saw pod success Apr 10 22:10:25.872: INFO: Pod "pod-subpath-test-configmap-tjcm" satisfied condition "success or failure" Apr 10 22:10:25.875: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-tjcm container test-container-subpath-configmap-tjcm: STEP: delete the pod Apr 10 22:10:25.910: INFO: Waiting for pod pod-subpath-test-configmap-tjcm to disappear Apr 10 22:10:25.920: INFO: Pod pod-subpath-test-configmap-tjcm no longer exists STEP: Deleting pod pod-subpath-test-configmap-tjcm Apr 10 22:10:25.920: INFO: Deleting pod "pod-subpath-test-configmap-tjcm" in namespace "subpath-4888" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:10:25.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4888" for this suite. • [SLOW TEST:24.314 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":228,"skipped":3641,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:10:25.933: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8715.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8715.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8715.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8715.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8715.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8715.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 22:10:32.139: INFO: DNS probes using dns-8715/dns-test-775de0c3-4838-477f-a32b-30d64c001607 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:10:32.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8715" for this suite. • [SLOW TEST:6.324 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":229,"skipped":3659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:10:32.258: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 22:10:33.247: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 22:10:35.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153433, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153433, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153433, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153433, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 22:10:38.294: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:10:38.298: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:10:39.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9172" for this suite. STEP: Destroying namespace "webhook-9172-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.245 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":230,"skipped":3684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:10:39.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 10 22:10:39.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9139' Apr 10 22:10:39.701: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 10 22:10:39.701: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Apr 10 22:10:39.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-9139' Apr 10 22:10:39.871: INFO: stderr: "" Apr 10 22:10:39.871: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:10:39.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9139" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":231,"skipped":3726,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:10:39.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: 
Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 22:10:41.179: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 22:10:43.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153441, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153441, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153441, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153441, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 22:10:46.258: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:10:46.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3925-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:10:47.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-8246" for this suite. STEP: Destroying namespace "webhook-8246-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.634 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":232,"skipped":3744,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:10:47.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8844 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 10 22:10:47.571: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 10 22:11:15.720: INFO: ExecWithOptions 
{Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.184:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8844 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 22:11:15.720: INFO: >>> kubeConfig: /root/.kube/config I0410 22:11:15.757842 7 log.go:172] (0xc00169c6e0) (0xc002310a00) Create stream I0410 22:11:15.757875 7 log.go:172] (0xc00169c6e0) (0xc002310a00) Stream added, broadcasting: 1 I0410 22:11:15.760764 7 log.go:172] (0xc00169c6e0) Reply frame received for 1 I0410 22:11:15.760815 7 log.go:172] (0xc00169c6e0) (0xc002735c20) Create stream I0410 22:11:15.760832 7 log.go:172] (0xc00169c6e0) (0xc002735c20) Stream added, broadcasting: 3 I0410 22:11:15.762128 7 log.go:172] (0xc00169c6e0) Reply frame received for 3 I0410 22:11:15.762159 7 log.go:172] (0xc00169c6e0) (0xc002735d60) Create stream I0410 22:11:15.762167 7 log.go:172] (0xc00169c6e0) (0xc002735d60) Stream added, broadcasting: 5 I0410 22:11:15.763042 7 log.go:172] (0xc00169c6e0) Reply frame received for 5 I0410 22:11:15.831725 7 log.go:172] (0xc00169c6e0) Data frame received for 3 I0410 22:11:15.831754 7 log.go:172] (0xc002735c20) (3) Data frame handling I0410 22:11:15.831769 7 log.go:172] (0xc002735c20) (3) Data frame sent I0410 22:11:15.831937 7 log.go:172] (0xc00169c6e0) Data frame received for 5 I0410 22:11:15.831973 7 log.go:172] (0xc002735d60) (5) Data frame handling I0410 22:11:15.832157 7 log.go:172] (0xc00169c6e0) Data frame received for 3 I0410 22:11:15.832191 7 log.go:172] (0xc002735c20) (3) Data frame handling I0410 22:11:15.834281 7 log.go:172] (0xc00169c6e0) Data frame received for 1 I0410 22:11:15.834313 7 log.go:172] (0xc002310a00) (1) Data frame handling I0410 22:11:15.834343 7 log.go:172] (0xc002310a00) (1) Data frame sent I0410 22:11:15.834368 7 log.go:172] (0xc00169c6e0) (0xc002310a00) Stream removed, broadcasting: 1 I0410 22:11:15.834473 7 log.go:172] 
(0xc00169c6e0) Go away received I0410 22:11:15.834525 7 log.go:172] (0xc00169c6e0) (0xc002310a00) Stream removed, broadcasting: 1 I0410 22:11:15.834549 7 log.go:172] (0xc00169c6e0) (0xc002735c20) Stream removed, broadcasting: 3 I0410 22:11:15.834566 7 log.go:172] (0xc00169c6e0) (0xc002735d60) Stream removed, broadcasting: 5 Apr 10 22:11:15.834: INFO: Found all expected endpoints: [netserver-0] Apr 10 22:11:15.838: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.39:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8844 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 22:11:15.838: INFO: >>> kubeConfig: /root/.kube/config I0410 22:11:15.866078 7 log.go:172] (0xc00169cd10) (0xc002310c80) Create stream I0410 22:11:15.866106 7 log.go:172] (0xc00169cd10) (0xc002310c80) Stream added, broadcasting: 1 I0410 22:11:15.867956 7 log.go:172] (0xc00169cd10) Reply frame received for 1 I0410 22:11:15.867982 7 log.go:172] (0xc00169cd10) (0xc0021e9040) Create stream I0410 22:11:15.867994 7 log.go:172] (0xc00169cd10) (0xc0021e9040) Stream added, broadcasting: 3 I0410 22:11:15.868807 7 log.go:172] (0xc00169cd10) Reply frame received for 3 I0410 22:11:15.868838 7 log.go:172] (0xc00169cd10) (0xc002310d20) Create stream I0410 22:11:15.868847 7 log.go:172] (0xc00169cd10) (0xc002310d20) Stream added, broadcasting: 5 I0410 22:11:15.869810 7 log.go:172] (0xc00169cd10) Reply frame received for 5 I0410 22:11:15.949973 7 log.go:172] (0xc00169cd10) Data frame received for 3 I0410 22:11:15.950013 7 log.go:172] (0xc0021e9040) (3) Data frame handling I0410 22:11:15.950052 7 log.go:172] (0xc0021e9040) (3) Data frame sent I0410 22:11:15.950478 7 log.go:172] (0xc00169cd10) Data frame received for 5 I0410 22:11:15.950529 7 log.go:172] (0xc002310d20) (5) Data frame handling I0410 22:11:15.950562 7 log.go:172] (0xc00169cd10) Data frame received for 3 I0410 
22:11:15.950579 7 log.go:172] (0xc0021e9040) (3) Data frame handling I0410 22:11:15.951799 7 log.go:172] (0xc00169cd10) Data frame received for 1 I0410 22:11:15.951831 7 log.go:172] (0xc002310c80) (1) Data frame handling I0410 22:11:15.951869 7 log.go:172] (0xc002310c80) (1) Data frame sent I0410 22:11:15.951890 7 log.go:172] (0xc00169cd10) (0xc002310c80) Stream removed, broadcasting: 1 I0410 22:11:15.951912 7 log.go:172] (0xc00169cd10) Go away received I0410 22:11:15.952028 7 log.go:172] (0xc00169cd10) (0xc002310c80) Stream removed, broadcasting: 1 I0410 22:11:15.952070 7 log.go:172] (0xc00169cd10) (0xc0021e9040) Stream removed, broadcasting: 3 I0410 22:11:15.952084 7 log.go:172] (0xc00169cd10) (0xc002310d20) Stream removed, broadcasting: 5 Apr 10 22:11:15.952: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:11:15.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8844" for this suite. 
• [SLOW TEST:28.443 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3753,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:11:15.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 22:11:16.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66fd026f-2466-4388-9eb8-94752b578738" in namespace "projected-4783" to be "success or failure" Apr 10 22:11:16.054: INFO: Pod "downwardapi-volume-66fd026f-2466-4388-9eb8-94752b578738": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.285537ms Apr 10 22:11:18.060: INFO: Pod "downwardapi-volume-66fd026f-2466-4388-9eb8-94752b578738": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021958924s Apr 10 22:11:20.065: INFO: Pod "downwardapi-volume-66fd026f-2466-4388-9eb8-94752b578738": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026536817s STEP: Saw pod success Apr 10 22:11:20.065: INFO: Pod "downwardapi-volume-66fd026f-2466-4388-9eb8-94752b578738" satisfied condition "success or failure" Apr 10 22:11:20.068: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-66fd026f-2466-4388-9eb8-94752b578738 container client-container: STEP: delete the pod Apr 10 22:11:20.091: INFO: Waiting for pod downwardapi-volume-66fd026f-2466-4388-9eb8-94752b578738 to disappear Apr 10 22:11:20.095: INFO: Pod downwardapi-volume-66fd026f-2466-4388-9eb8-94752b578738 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:11:20.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4783" for this suite. 
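What the "podname only" test exercises: a projected downwardAPI volume renders a `fieldRef` on `metadata.name` into a file, and the client container simply prints that file. A sketch of the rendering, using plain dicts in place of the API objects (the helper is illustrative; field paths follow the Kubernetes downward API):

```python
# Hypothetical sketch of projected downwardAPI rendering: each item maps a
# pod metadata field to a file in the volume. Only the two field paths the
# conformance tests commonly use are modeled here.
def render_downward_api(pod_meta, items):
    """Map projected downwardAPI items to {file path: file contents}."""
    fields = {
        "metadata.name": pod_meta["name"],
        "metadata.namespace": pod_meta["namespace"],
    }
    return {i["path"]: fields[i["fieldRef"]["fieldPath"]] for i in items}

files = render_downward_api(
    {"name": "downwardapi-volume-66fd026f", "namespace": "projected-4783"},
    [{"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}}],
)
print(files["podname"])  # downwardapi-volume-66fd026f
```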
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3755,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:11:20.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-dc28aa2e-da1a-40a8-993f-7086caefc6ce STEP: Creating a pod to test consume secrets Apr 10 22:11:20.187: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3" in namespace "projected-8122" to be "success or failure" Apr 10 22:11:20.222: INFO: Pod "pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 35.652476ms Apr 10 22:11:22.350: INFO: Pod "pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16311762s Apr 10 22:11:24.354: INFO: Pod "pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3": Phase="Running", Reason="", readiness=true. Elapsed: 4.16771622s Apr 10 22:11:26.359: INFO: Pod "pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.171827936s STEP: Saw pod success Apr 10 22:11:26.359: INFO: Pod "pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3" satisfied condition "success or failure" Apr 10 22:11:26.362: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3 container projected-secret-volume-test: STEP: delete the pod Apr 10 22:11:26.397: INFO: Waiting for pod pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3 to disappear Apr 10 22:11:26.410: INFO: Pod pod-projected-secrets-83243b3b-76cf-46bf-844b-ff9839284cf3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:11:26.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8122" for this suite. • [SLOW TEST:6.328 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3764,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:11:26.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 22:11:26.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae6dfb8a-720f-4e7a-9e6b-083379b97844" in namespace "downward-api-6688" to be "success or failure" Apr 10 22:11:26.500: INFO: Pod "downwardapi-volume-ae6dfb8a-720f-4e7a-9e6b-083379b97844": Phase="Pending", Reason="", readiness=false. Elapsed: 9.797148ms Apr 10 22:11:28.522: INFO: Pod "downwardapi-volume-ae6dfb8a-720f-4e7a-9e6b-083379b97844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031298554s Apr 10 22:11:30.526: INFO: Pod "downwardapi-volume-ae6dfb8a-720f-4e7a-9e6b-083379b97844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036122395s STEP: Saw pod success Apr 10 22:11:30.526: INFO: Pod "downwardapi-volume-ae6dfb8a-720f-4e7a-9e6b-083379b97844" satisfied condition "success or failure" Apr 10 22:11:30.529: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ae6dfb8a-720f-4e7a-9e6b-083379b97844 container client-container: STEP: delete the pod Apr 10 22:11:30.572: INFO: Waiting for pod downwardapi-volume-ae6dfb8a-720f-4e7a-9e6b-083379b97844 to disappear Apr 10 22:11:30.590: INFO: Pod downwardapi-volume-ae6dfb8a-720f-4e7a-9e6b-083379b97844 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:11:30.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6688" for this suite. 
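The cpu-limit test above relies on `resourceFieldRef` divisor arithmetic: the container's `limits.cpu` is rendered as the limit divided by the divisor, rounded up per Kubernetes quantity semantics. A sketch of that arithmetic (numbers are illustrative, not taken from the log):

```python
import math

# Hypothetical sketch of downward API resourceFieldRef rendering for
# limits.cpu: quantity / divisor, rounded up to a whole unit.
def rendered_cpu_limit(limit_millicores, divisor_millicores):
    """CPU limit as written into the volume file, in divisor units."""
    return math.ceil(limit_millicores / divisor_millicores)

print(rendered_cpu_limit(250, 1))     # 250 -> divisor "1m" yields millicores
print(rendered_cpu_limit(250, 1000))  # 1   -> divisor "1" yields whole cores, rounded up
```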
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3780,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:11:30.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:11:30.705: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"23af73fd-40c5-4731-8600-9e89a72fe7a6", Controller:(*bool)(0xc0010aae32), BlockOwnerDeletion:(*bool)(0xc0010aae33)}} Apr 10 22:11:30.746: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"333da577-cbcf-4cbf-9d86-95aed2a75474", Controller:(*bool)(0xc0046fb492), BlockOwnerDeletion:(*bool)(0xc0046fb493)}} Apr 10 22:11:30.758: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"15fe5be8-e733-4c7d-a0c3-8603b061a8b5", Controller:(*bool)(0xc0010ab0aa), BlockOwnerDeletion:(*bool)(0xc0010ab0ab)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:11:35.798: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "gc-3909" for this suite. • [SLOW TEST:5.226 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":237,"skipped":3848,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:11:35.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Apr 10 22:11:35.897: INFO: Waiting up to 5m0s for pod "pod-b11213ba-9ea9-493c-a40f-e36105f6e685" in namespace "emptydir-3859" to be "success or failure" Apr 10 22:11:35.907: INFO: Pod "pod-b11213ba-9ea9-493c-a40f-e36105f6e685": Phase="Pending", Reason="", readiness=false. Elapsed: 9.954613ms Apr 10 22:11:37.959: INFO: Pod "pod-b11213ba-9ea9-493c-a40f-e36105f6e685": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062050551s Apr 10 22:11:39.964: INFO: Pod "pod-b11213ba-9ea9-493c-a40f-e36105f6e685": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066355162s STEP: Saw pod success Apr 10 22:11:39.964: INFO: Pod "pod-b11213ba-9ea9-493c-a40f-e36105f6e685" satisfied condition "success or failure" Apr 10 22:11:39.967: INFO: Trying to get logs from node jerma-worker pod pod-b11213ba-9ea9-493c-a40f-e36105f6e685 container test-container: STEP: delete the pod Apr 10 22:11:39.983: INFO: Waiting for pod pod-b11213ba-9ea9-493c-a40f-e36105f6e685 to disappear Apr 10 22:11:39.988: INFO: Pod pod-b11213ba-9ea9-493c-a40f-e36105f6e685 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:11:39.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3859" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3853,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:11:39.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 10 22:11:44.629: INFO: Successfully updated pod "labelsupdate8323110f-cd19-4549-9eec-b8c0fd0259ee" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:11:46.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5784" for this suite. • [SLOW TEST:6.668 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3857,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:11:46.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-7ea603f6-7cc4-497e-bbc6-5b26c0e97653 in namespace container-probe-7496 Apr 10 22:11:50.790: INFO: Started pod test-webserver-7ea603f6-7cc4-497e-bbc6-5b26c0e97653 in namespace container-probe-7496 STEP: checking the pod's current state and verifying that restartCount is present Apr 10 22:11:50.794: INFO: Initial restart count of pod test-webserver-7ea603f6-7cc4-497e-bbc6-5b26c0e97653 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:15:51.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7496" for this suite. • [SLOW TEST:245.217 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3861,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:15:51.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-ccb23060-37ee-4a82-8d6b-ac91a6fd6cc2 in namespace container-probe-3196 Apr 10 22:15:56.082: INFO: Started pod liveness-ccb23060-37ee-4a82-8d6b-ac91a6fd6cc2 in namespace container-probe-3196 STEP: checking the pod's current state and verifying that restartCount is present Apr 10 22:15:56.084: INFO: Initial restart count of pod liveness-ccb23060-37ee-4a82-8d6b-ac91a6fd6cc2 is 0 Apr 10 22:16:14.141: INFO: Restart count of pod container-probe-3196/liveness-ccb23060-37ee-4a82-8d6b-ac91a6fd6cc2 is now 1 (18.057494228s elapsed) Apr 10 22:16:34.183: INFO: Restart count of pod container-probe-3196/liveness-ccb23060-37ee-4a82-8d6b-ac91a6fd6cc2 is now 2 (38.098864578s elapsed) Apr 10 22:16:54.225: INFO: Restart count of pod container-probe-3196/liveness-ccb23060-37ee-4a82-8d6b-ac91a6fd6cc2 is now 3 (58.141403654s elapsed) Apr 10 22:17:14.286: INFO: Restart count of pod container-probe-3196/liveness-ccb23060-37ee-4a82-8d6b-ac91a6fd6cc2 is now 4 (1m18.201953371s elapsed) Apr 10 22:18:16.421: INFO: Restart count of pod container-probe-3196/liveness-ccb23060-37ee-4a82-8d6b-ac91a6fd6cc2 is now 5 (2m20.337463128s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:18:16.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3196" for this suite. 
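The pass condition for the restart-count test above is simply that the observed `restartCount` values never decrease across polls. Sketch of that check (illustrative; the real assertion lives in the e2e probe utilities):

```python
# The kubelet only ever increments restartCount, so the sequence sampled by
# the test must be non-decreasing.
def monotonically_increasing(counts):
    """True if each sampled restart count is >= the previous one."""
    return all(later >= earlier
               for earlier, later in zip(counts, counts[1:]))

# The counts observed in the log above: 0 through 5, in order.
print(monotonically_increasing([0, 1, 2, 3, 4, 5]))  # True
print(monotonically_increasing([0, 2, 1]))           # False
```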
• [SLOW TEST:144.558 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3873,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:18:16.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cc124114-3478-42bf-88f1-84eab996423d STEP: Creating a pod to test consume secrets Apr 10 22:18:16.537: INFO: Waiting up to 5m0s for pod "pod-secrets-7a2e712a-6559-4614-8b51-7b7d0d58166a" in namespace "secrets-9197" to be "success or failure" Apr 10 22:18:16.566: INFO: Pod "pod-secrets-7a2e712a-6559-4614-8b51-7b7d0d58166a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.758945ms Apr 10 22:18:18.570: INFO: Pod "pod-secrets-7a2e712a-6559-4614-8b51-7b7d0d58166a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032617388s Apr 10 22:18:20.574: INFO: Pod "pod-secrets-7a2e712a-6559-4614-8b51-7b7d0d58166a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036903807s STEP: Saw pod success Apr 10 22:18:20.574: INFO: Pod "pod-secrets-7a2e712a-6559-4614-8b51-7b7d0d58166a" satisfied condition "success or failure" Apr 10 22:18:20.577: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7a2e712a-6559-4614-8b51-7b7d0d58166a container secret-volume-test: STEP: delete the pod Apr 10 22:18:20.603: INFO: Waiting for pod pod-secrets-7a2e712a-6559-4614-8b51-7b7d0d58166a to disappear Apr 10 22:18:20.607: INFO: Pod pod-secrets-7a2e712a-6559-4614-8b51-7b7d0d58166a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:18:20.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9197" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3877,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:18:20.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:18:36.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5655" for this suite. • [SLOW TEST:16.200 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":243,"skipped":3906,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:18:36.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 10 22:18:36.877: INFO: >>> kubeConfig: /root/.kube/config Apr 10 22:18:38.768: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:18:50.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6330" for this suite. 
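The CRD test above checks that two CRDs sharing a group and version but declaring different kinds both surface in the OpenAPI document. Conceptually that works because each kind gets its own schema key, so the entries never collide. The key format below is a simplified illustration, not the exact form the apiserver publishes, and the group/kind names are hypothetical:

```python
# Simplified sketch: distinct kinds under one group/version map to distinct
# OpenAPI definition keys, so both CRDs' schemas can coexist.
def definition_key(group, version, kind):
    return f"{group}/{version}.{kind}"

keys = {definition_key("publish-test.example.com", "v1", kind)
        for kind in ("Foo", "Waldo")}
print(len(keys))  # 2 distinct schema entries
```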
• [SLOW TEST:13.505 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":244,"skipped":3928,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:18:50.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 22:18:50.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0070f745-7bc8-40f0-81b4-d5b101ed8909" in namespace "projected-3372" to be "success or failure" Apr 10 22:18:50.398: INFO: Pod "downwardapi-volume-0070f745-7bc8-40f0-81b4-d5b101ed8909": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.297018ms Apr 10 22:18:52.402: INFO: Pod "downwardapi-volume-0070f745-7bc8-40f0-81b4-d5b101ed8909": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007253813s Apr 10 22:18:54.407: INFO: Pod "downwardapi-volume-0070f745-7bc8-40f0-81b4-d5b101ed8909": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011665077s STEP: Saw pod success Apr 10 22:18:54.407: INFO: Pod "downwardapi-volume-0070f745-7bc8-40f0-81b4-d5b101ed8909" satisfied condition "success or failure" Apr 10 22:18:54.410: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0070f745-7bc8-40f0-81b4-d5b101ed8909 container client-container: STEP: delete the pod Apr 10 22:18:54.430: INFO: Waiting for pod downwardapi-volume-0070f745-7bc8-40f0-81b4-d5b101ed8909 to disappear Apr 10 22:18:54.447: INFO: Pod downwardapi-volume-0070f745-7bc8-40f0-81b4-d5b101ed8909 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:18:54.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3372" for this suite. 
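The DefaultMode test above exercises volume file permissions: files in a projected downwardAPI volume get the volume's `defaultMode` unless an item sets its own `mode`. A sketch of that precedence (illustrative helper; 0644 is the API's documented default, the other octals are example values):

```python
# Hypothetical sketch of defaultMode/item-mode precedence for projected
# volume files: per-item mode wins, otherwise the volume default applies.
def effective_mode(default_mode=0o644, item_mode=None):
    """File mode the kubelet applies to a projected volume file."""
    return item_mode if item_mode is not None else default_mode

print(oct(effective_mode(default_mode=0o400)))  # 0o400 (volume-wide override)
print(oct(effective_mode(item_mode=0o444)))     # 0o444 (per-item override)
```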
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3943,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:18:54.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-114 to expose endpoints map[] Apr 10 22:18:54.579: INFO: Get endpoints failed (23.743176ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 10 22:18:55.583: INFO: successfully validated that service multi-endpoint-test in namespace services-114 exposes endpoints map[] (1.027226649s elapsed) STEP: Creating pod pod1 in namespace services-114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-114 to expose endpoints map[pod1:[100]] Apr 10 22:18:58.656: INFO: successfully validated that service multi-endpoint-test in namespace services-114 exposes endpoints map[pod1:[100]] (3.065852432s elapsed) STEP: Creating pod pod2 in namespace services-114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-114 to expose 
endpoints map[pod1:[100] pod2:[101]] Apr 10 22:19:01.812: INFO: successfully validated that service multi-endpoint-test in namespace services-114 exposes endpoints map[pod1:[100] pod2:[101]] (3.152132358s elapsed) STEP: Deleting pod pod1 in namespace services-114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-114 to expose endpoints map[pod2:[101]] Apr 10 22:19:02.831: INFO: successfully validated that service multi-endpoint-test in namespace services-114 exposes endpoints map[pod2:[101]] (1.015036218s elapsed) STEP: Deleting pod pod2 in namespace services-114 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-114 to expose endpoints map[] Apr 10 22:19:03.899: INFO: successfully validated that service multi-endpoint-test in namespace services-114 exposes endpoints map[] (1.063808563s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:03.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-114" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.636 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":246,"skipped":3948,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:04.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:04.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9828" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3960,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:04.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:19:04.440: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 10 22:19:07.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5321 create -f -' Apr 10 22:19:10.483: INFO: stderr: "" Apr 10 22:19:10.483: INFO: stdout: "e2e-test-crd-publish-openapi-4836-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 10 22:19:10.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5321 delete e2e-test-crd-publish-openapi-4836-crds test-cr' Apr 10 22:19:10.608: INFO: stderr: "" Apr 10 22:19:10.608: INFO: stdout: "e2e-test-crd-publish-openapi-4836-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 10 22:19:10.608: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5321 apply -f -' Apr 10 22:19:10.867: INFO: stderr: "" Apr 10 22:19:10.867: INFO: stdout: "e2e-test-crd-publish-openapi-4836-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 10 22:19:10.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5321 delete e2e-test-crd-publish-openapi-4836-crds test-cr' Apr 10 22:19:10.984: INFO: stderr: "" Apr 10 22:19:10.984: INFO: stdout: "e2e-test-crd-publish-openapi-4836-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 10 22:19:10.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4836-crds' Apr 10 22:19:11.228: INFO: stderr: "" Apr 10 22:19:11.228: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4836-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:14.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5321" for this suite. • [SLOW TEST:9.920 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":248,"skipped":3963,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:14.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name 
secret-emptykey-test-dcd58541-f50f-489f-8c91-0cd068b6b40e [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:14.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3279" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":249,"skipped":3991,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:14.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-e34dca50-d466-4bc2-b9b5-0abe133a95b3 STEP: Creating a pod to test consume secrets Apr 10 22:19:14.287: INFO: Waiting up to 5m0s for pod "pod-secrets-fb5cefcf-109e-44f8-9d0b-2e5fab320a36" in namespace "secrets-2103" to be "success or failure" Apr 10 22:19:14.291: INFO: Pod "pod-secrets-fb5cefcf-109e-44f8-9d0b-2e5fab320a36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17608ms Apr 10 22:19:16.318: INFO: Pod "pod-secrets-fb5cefcf-109e-44f8-9d0b-2e5fab320a36": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031334558s Apr 10 22:19:18.343: INFO: Pod "pod-secrets-fb5cefcf-109e-44f8-9d0b-2e5fab320a36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055692163s STEP: Saw pod success Apr 10 22:19:18.343: INFO: Pod "pod-secrets-fb5cefcf-109e-44f8-9d0b-2e5fab320a36" satisfied condition "success or failure" Apr 10 22:19:18.346: INFO: Trying to get logs from node jerma-worker pod pod-secrets-fb5cefcf-109e-44f8-9d0b-2e5fab320a36 container secret-volume-test: STEP: delete the pod Apr 10 22:19:18.393: INFO: Waiting for pod pod-secrets-fb5cefcf-109e-44f8-9d0b-2e5fab320a36 to disappear Apr 10 22:19:18.540: INFO: Pod pod-secrets-fb5cefcf-109e-44f8-9d0b-2e5fab320a36 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:18.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2103" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4013,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:18.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's 
memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 10 22:19:18.663: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3510dec-5adc-477a-b1b1-fe43a67c6135" in namespace "projected-9584" to be "success or failure" Apr 10 22:19:18.669: INFO: Pod "downwardapi-volume-c3510dec-5adc-477a-b1b1-fe43a67c6135": Phase="Pending", Reason="", readiness=false. Elapsed: 5.611513ms Apr 10 22:19:20.673: INFO: Pod "downwardapi-volume-c3510dec-5adc-477a-b1b1-fe43a67c6135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00998017s Apr 10 22:19:22.677: INFO: Pod "downwardapi-volume-c3510dec-5adc-477a-b1b1-fe43a67c6135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014273136s STEP: Saw pod success Apr 10 22:19:22.677: INFO: Pod "downwardapi-volume-c3510dec-5adc-477a-b1b1-fe43a67c6135" satisfied condition "success or failure" Apr 10 22:19:22.681: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c3510dec-5adc-477a-b1b1-fe43a67c6135 container client-container: STEP: delete the pod Apr 10 22:19:22.724: INFO: Waiting for pod downwardapi-volume-c3510dec-5adc-477a-b1b1-fe43a67c6135 to disappear Apr 10 22:19:22.735: INFO: Pod downwardapi-volume-c3510dec-5adc-477a-b1b1-fe43a67c6135 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:22.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9584" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4015,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:22.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 10 22:19:22.820: INFO: Waiting up to 5m0s for pod "downward-api-49940f6e-d7b7-4955-ba0b-fd2e905b9ca1" in namespace "downward-api-8955" to be "success or failure" Apr 10 22:19:22.836: INFO: Pod "downward-api-49940f6e-d7b7-4955-ba0b-fd2e905b9ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.925137ms Apr 10 22:19:24.876: INFO: Pod "downward-api-49940f6e-d7b7-4955-ba0b-fd2e905b9ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05504673s Apr 10 22:19:26.879: INFO: Pod "downward-api-49940f6e-d7b7-4955-ba0b-fd2e905b9ca1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058750539s STEP: Saw pod success Apr 10 22:19:26.879: INFO: Pod "downward-api-49940f6e-d7b7-4955-ba0b-fd2e905b9ca1" satisfied condition "success or failure" Apr 10 22:19:26.882: INFO: Trying to get logs from node jerma-worker pod downward-api-49940f6e-d7b7-4955-ba0b-fd2e905b9ca1 container dapi-container: STEP: delete the pod Apr 10 22:19:26.904: INFO: Waiting for pod downward-api-49940f6e-d7b7-4955-ba0b-fd2e905b9ca1 to disappear Apr 10 22:19:26.914: INFO: Pod downward-api-49940f6e-d7b7-4955-ba0b-fd2e905b9ca1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:26.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8955" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4032,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:26.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:19:26.983: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to 
kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:31.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5461" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4038,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:31.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Apr 10 22:19:31.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 10 22:19:31.383: INFO: stderr: "" Apr 10 22:19:31.383: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:31.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9333" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":254,"skipped":4051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:31.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 10 22:19:31.506: INFO: Waiting up to 5m0s for pod "pod-f406f55d-fe9d-4ff9-a5b8-4598c005e7b1" in namespace "emptydir-6612" to be "success or failure" Apr 10 22:19:31.525: INFO: Pod "pod-f406f55d-fe9d-4ff9-a5b8-4598c005e7b1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.154154ms Apr 10 22:19:33.528: INFO: Pod "pod-f406f55d-fe9d-4ff9-a5b8-4598c005e7b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022772907s Apr 10 22:19:35.532: INFO: Pod "pod-f406f55d-fe9d-4ff9-a5b8-4598c005e7b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026731108s STEP: Saw pod success Apr 10 22:19:35.532: INFO: Pod "pod-f406f55d-fe9d-4ff9-a5b8-4598c005e7b1" satisfied condition "success or failure" Apr 10 22:19:35.535: INFO: Trying to get logs from node jerma-worker pod pod-f406f55d-fe9d-4ff9-a5b8-4598c005e7b1 container test-container: STEP: delete the pod Apr 10 22:19:35.668: INFO: Waiting for pod pod-f406f55d-fe9d-4ff9-a5b8-4598c005e7b1 to disappear Apr 10 22:19:35.675: INFO: Pod pod-f406f55d-fe9d-4ff9-a5b8-4598c005e7b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:35.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6612" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4116,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:35.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 10 22:19:43.807: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 10 22:19:43.813: INFO: Pod pod-with-prestop-exec-hook still exists Apr 10 22:19:45.814: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 10 22:19:45.818: INFO: Pod pod-with-prestop-exec-hook still exists Apr 10 22:19:47.814: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 10 22:19:47.817: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:47.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3792" for this suite. 
• [SLOW TEST:12.147 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4116,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:47.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 22:19:48.286: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 22:19:50.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153988, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153988, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153988, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722153988, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 22:19:53.327: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:19:53.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5772-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:54.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3842" for this suite. STEP: Destroying namespace "webhook-3842-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.971 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":257,"skipped":4117,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:54.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-04488c44-bc2a-4234-a171-160b7b33223c STEP: Creating a pod to test consume configMaps Apr 10 22:19:54.884: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2d9cfc9-d1be-4cc6-8e81-411bd8fbf679" in namespace "projected-6766" to be "success or failure" 
Apr 10 22:19:54.892: INFO: Pod "pod-projected-configmaps-b2d9cfc9-d1be-4cc6-8e81-411bd8fbf679": Phase="Pending", Reason="", readiness=false. Elapsed: 7.261644ms Apr 10 22:19:56.900: INFO: Pod "pod-projected-configmaps-b2d9cfc9-d1be-4cc6-8e81-411bd8fbf679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0152723s Apr 10 22:19:58.904: INFO: Pod "pod-projected-configmaps-b2d9cfc9-d1be-4cc6-8e81-411bd8fbf679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019911206s STEP: Saw pod success Apr 10 22:19:58.904: INFO: Pod "pod-projected-configmaps-b2d9cfc9-d1be-4cc6-8e81-411bd8fbf679" satisfied condition "success or failure" Apr 10 22:19:58.907: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b2d9cfc9-d1be-4cc6-8e81-411bd8fbf679 container projected-configmap-volume-test: STEP: delete the pod Apr 10 22:19:58.929: INFO: Waiting for pod pod-projected-configmaps-b2d9cfc9-d1be-4cc6-8e81-411bd8fbf679 to disappear Apr 10 22:19:58.984: INFO: Pod pod-projected-configmaps-b2d9cfc9-d1be-4cc6-8e81-411bd8fbf679 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:19:58.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6766" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4134,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:19:58.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 10 22:19:59.636: INFO: Pod name wrapped-volume-race-a7282f5d-ca29-4083-9cd9-519f100b6e3f: Found 0 pods out of 5 Apr 10 22:20:04.643: INFO: Pod name wrapped-volume-race-a7282f5d-ca29-4083-9cd9-519f100b6e3f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a7282f5d-ca29-4083-9cd9-519f100b6e3f in namespace emptydir-wrapper-1403, will wait for the garbage collector to delete the pods Apr 10 22:20:18.725: INFO: Deleting ReplicationController wrapped-volume-race-a7282f5d-ca29-4083-9cd9-519f100b6e3f took: 8.22812ms Apr 10 22:20:19.125: INFO: Terminating ReplicationController wrapped-volume-race-a7282f5d-ca29-4083-9cd9-519f100b6e3f pods took: 400.403767ms STEP: Creating RC which spawns configmap-volume pods Apr 10 22:20:30.364: INFO: Pod name 
wrapped-volume-race-8ae98793-0216-4aa7-8c1d-7555694fda9f: Found 0 pods out of 5 Apr 10 22:20:35.370: INFO: Pod name wrapped-volume-race-8ae98793-0216-4aa7-8c1d-7555694fda9f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8ae98793-0216-4aa7-8c1d-7555694fda9f in namespace emptydir-wrapper-1403, will wait for the garbage collector to delete the pods Apr 10 22:20:49.468: INFO: Deleting ReplicationController wrapped-volume-race-8ae98793-0216-4aa7-8c1d-7555694fda9f took: 18.063998ms Apr 10 22:20:49.768: INFO: Terminating ReplicationController wrapped-volume-race-8ae98793-0216-4aa7-8c1d-7555694fda9f pods took: 300.265073ms STEP: Creating RC which spawns configmap-volume pods Apr 10 22:20:59.346: INFO: Pod name wrapped-volume-race-28fbe54c-0bf2-4e58-9a65-540e081cfd4f: Found 0 pods out of 5 Apr 10 22:21:04.372: INFO: Pod name wrapped-volume-race-28fbe54c-0bf2-4e58-9a65-540e081cfd4f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-28fbe54c-0bf2-4e58-9a65-540e081cfd4f in namespace emptydir-wrapper-1403, will wait for the garbage collector to delete the pods Apr 10 22:21:18.461: INFO: Deleting ReplicationController wrapped-volume-race-28fbe54c-0bf2-4e58-9a65-540e081cfd4f took: 14.945189ms Apr 10 22:21:18.761: INFO: Terminating ReplicationController wrapped-volume-race-28fbe54c-0bf2-4e58-9a65-540e081cfd4f pods took: 300.278643ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:21:30.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1403" for this suite. 
• [SLOW TEST:91.252 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":259,"skipped":4148,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:21:30.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Apr 10 22:21:30.873: INFO: created pod pod-service-account-defaultsa Apr 10 22:21:30.873: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 10 22:21:30.880: INFO: created pod pod-service-account-mountsa Apr 10 22:21:30.880: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 10 22:21:30.885: INFO: created pod pod-service-account-nomountsa Apr 10 22:21:30.885: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 10 22:21:30.955: INFO: created pod pod-service-account-defaultsa-mountspec Apr 10 22:21:30.956: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 10 22:21:30.963: INFO: created pod pod-service-account-mountsa-mountspec Apr 10 22:21:30.963: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 10 22:21:30.969: INFO: created pod pod-service-account-nomountsa-mountspec Apr 10 22:21:30.969: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 10 22:21:31.028: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 10 22:21:31.028: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 10 22:21:31.088: INFO: created pod pod-service-account-mountsa-nomountspec Apr 10 22:21:31.088: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 10 22:21:31.130: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 10 22:21:31.130: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:21:31.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3793" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":260,"skipped":4155,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:21:31.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:21:31.289: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a" in namespace "security-context-test-4186" to be "success or failure" Apr 10 22:21:31.292: INFO: Pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.654026ms Apr 10 22:21:33.296: INFO: Pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007530293s Apr 10 22:21:35.303: INFO: Pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.013965625s Apr 10 22:21:37.513: INFO: Pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223743236s Apr 10 22:21:39.542: INFO: Pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253500624s Apr 10 22:21:41.603: INFO: Pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a": Phase="Running", Reason="", readiness=true. Elapsed: 10.313934705s Apr 10 22:21:43.611: INFO: Pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.32223s Apr 10 22:21:43.611: INFO: Pod "busybox-readonly-false-75978116-c2a3-49f0-a785-4aa3fd96441a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:21:43.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4186" for this suite. 
• [SLOW TEST:12.427 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:21:43.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 10 22:21:43.724: INFO: Waiting up to 5m0s for pod "pod-9d3535c7-a2e6-4102-89f1-9da05534b018" in namespace "emptydir-612" to be "success or failure" Apr 10 22:21:43.781: INFO: Pod "pod-9d3535c7-a2e6-4102-89f1-9da05534b018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.194266ms Apr 10 22:21:45.786: INFO: Pod "pod-9d3535c7-a2e6-4102-89f1-9da05534b018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061524734s Apr 10 22:21:47.790: INFO: Pod "pod-9d3535c7-a2e6-4102-89f1-9da05534b018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065991514s STEP: Saw pod success Apr 10 22:21:47.790: INFO: Pod "pod-9d3535c7-a2e6-4102-89f1-9da05534b018" satisfied condition "success or failure" Apr 10 22:21:47.793: INFO: Trying to get logs from node jerma-worker pod pod-9d3535c7-a2e6-4102-89f1-9da05534b018 container test-container: STEP: delete the pod Apr 10 22:21:47.828: INFO: Waiting for pod pod-9d3535c7-a2e6-4102-89f1-9da05534b018 to disappear Apr 10 22:21:47.843: INFO: Pod pod-9d3535c7-a2e6-4102-89f1-9da05534b018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:21:47.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-612" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:21:47.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:21:47.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 10 22:21:48.492: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-10T22:21:48Z generation:1 name:name1 resourceVersion:7058063 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1b5b7bcd-8b34-48e9-8935-ea3a291e172a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 10 22:21:58.498: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-10T22:21:58Z generation:1 name:name2 resourceVersion:7058111 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:76ed1b8f-66c9-4072-bc87-3b1c50006fab] num:map[num1:9223372036854775807 num2:1000000]]} STEP: 
Modifying first CR Apr 10 22:22:08.504: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-10T22:21:48Z generation:2 name:name1 resourceVersion:7058141 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1b5b7bcd-8b34-48e9-8935-ea3a291e172a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 10 22:22:18.510: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-10T22:21:58Z generation:2 name:name2 resourceVersion:7058169 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:76ed1b8f-66c9-4072-bc87-3b1c50006fab] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 10 22:22:28.518: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-10T22:21:48Z generation:2 name:name1 resourceVersion:7058199 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1b5b7bcd-8b34-48e9-8935-ea3a291e172a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 10 22:22:38.528: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-10T22:21:58Z generation:2 name:name2 resourceVersion:7058229 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:76ed1b8f-66c9-4072-bc87-3b1c50006fab] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:22:49.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1815" for this suite. 
• [SLOW TEST:61.224 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":263,"skipped":4265,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:22:49.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 10 22:23:17.170: INFO: Container started at 2020-04-10 22:22:51 +0000 UTC, pod became ready at 2020-04-10 22:23:15 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:23:17.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7500" for this suite. • [SLOW TEST:28.103 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4287,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:23:17.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 10 22:23:17.237: INFO: Created pod &Pod{ObjectMeta:{dns-6780 dns-6780 /api/v1/namespaces/dns-6780/pods/dns-6780 d21c293b-11f7-4a17-aacb-e2f5ffc6a568 7058368 0 2020-04-10 22:23:17 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r9gm9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r9gm9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r9gm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 10 22:23:21.246: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6780 PodName:dns-6780 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 22:23:21.246: INFO: >>> kubeConfig: /root/.kube/config I0410 22:23:21.285083 7 log.go:172] (0xc0026a4420) (0xc001700d20) Create stream I0410 22:23:21.285284 7 log.go:172] (0xc0026a4420) (0xc001700d20) Stream added, broadcasting: 1 I0410 22:23:21.287065 7 log.go:172] (0xc0026a4420) Reply frame received for 1 I0410 22:23:21.287121 7 log.go:172] (0xc0026a4420) (0xc001e4b220) Create stream I0410 22:23:21.287139 7 log.go:172] (0xc0026a4420) (0xc001e4b220) Stream added, broadcasting: 3 I0410 22:23:21.287949 7 log.go:172] (0xc0026a4420) Reply frame received for 3 I0410 22:23:21.287983 7 log.go:172] (0xc0026a4420) (0xc001e4b400) Create stream I0410 22:23:21.287992 7 log.go:172] (0xc0026a4420) (0xc001e4b400) Stream added, broadcasting: 5 I0410 22:23:21.288911 7 log.go:172] (0xc0026a4420) Reply frame received for 5 I0410 22:23:21.387603 7 log.go:172] (0xc0026a4420) Data frame received for 3 I0410 22:23:21.387639 7 log.go:172] (0xc001e4b220) (3) Data frame handling I0410 22:23:21.387660 7 log.go:172] (0xc001e4b220) (3) Data frame sent I0410 22:23:21.388468 7 log.go:172] (0xc0026a4420) Data frame received for 3 I0410 22:23:21.388519 7 log.go:172] (0xc001e4b220) (3) Data frame handling I0410 22:23:21.388730 7 log.go:172] (0xc0026a4420) Data frame received for 5 I0410 22:23:21.388746 7 log.go:172] (0xc001e4b400) (5) Data frame handling I0410 22:23:21.390535 7 log.go:172] (0xc0026a4420) Data frame received for 1 I0410 22:23:21.390567 7 log.go:172] (0xc001700d20) (1) Data frame handling I0410 22:23:21.390595 7 log.go:172] (0xc001700d20) (1) Data frame sent I0410 22:23:21.390624 7 log.go:172] (0xc0026a4420) (0xc001700d20) Stream removed, broadcasting: 1 I0410 22:23:21.390649 7 log.go:172] (0xc0026a4420) Go away received I0410 22:23:21.390828 7 log.go:172] (0xc0026a4420) 
(0xc001700d20) Stream removed, broadcasting: 1 I0410 22:23:21.390862 7 log.go:172] (0xc0026a4420) (0xc001e4b220) Stream removed, broadcasting: 3 I0410 22:23:21.390884 7 log.go:172] (0xc0026a4420) (0xc001e4b400) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 10 22:23:21.390: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6780 PodName:dns-6780 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 22:23:21.390: INFO: >>> kubeConfig: /root/.kube/config I0410 22:23:21.425659 7 log.go:172] (0xc0026a4a50) (0xc001701220) Create stream I0410 22:23:21.425691 7 log.go:172] (0xc0026a4a50) (0xc001701220) Stream added, broadcasting: 1 I0410 22:23:21.428103 7 log.go:172] (0xc0026a4a50) Reply frame received for 1 I0410 22:23:21.428142 7 log.go:172] (0xc0026a4a50) (0xc000faf5e0) Create stream I0410 22:23:21.428152 7 log.go:172] (0xc0026a4a50) (0xc000faf5e0) Stream added, broadcasting: 3 I0410 22:23:21.429293 7 log.go:172] (0xc0026a4a50) Reply frame received for 3 I0410 22:23:21.429334 7 log.go:172] (0xc0026a4a50) (0xc0019d0140) Create stream I0410 22:23:21.429351 7 log.go:172] (0xc0026a4a50) (0xc0019d0140) Stream added, broadcasting: 5 I0410 22:23:21.430539 7 log.go:172] (0xc0026a4a50) Reply frame received for 5 I0410 22:23:21.509052 7 log.go:172] (0xc0026a4a50) Data frame received for 3 I0410 22:23:21.509078 7 log.go:172] (0xc000faf5e0) (3) Data frame handling I0410 22:23:21.509091 7 log.go:172] (0xc000faf5e0) (3) Data frame sent I0410 22:23:21.509705 7 log.go:172] (0xc0026a4a50) Data frame received for 3 I0410 22:23:21.509726 7 log.go:172] (0xc000faf5e0) (3) Data frame handling I0410 22:23:21.509738 7 log.go:172] (0xc0026a4a50) Data frame received for 5 I0410 22:23:21.509757 7 log.go:172] (0xc0019d0140) (5) Data frame handling I0410 22:23:21.511522 7 log.go:172] (0xc0026a4a50) Data frame received for 1 I0410 22:23:21.511534 7 log.go:172] (0xc001701220) (1) 
Data frame handling I0410 22:23:21.511545 7 log.go:172] (0xc001701220) (1) Data frame sent I0410 22:23:21.511552 7 log.go:172] (0xc0026a4a50) (0xc001701220) Stream removed, broadcasting: 1 I0410 22:23:21.511604 7 log.go:172] (0xc0026a4a50) (0xc001701220) Stream removed, broadcasting: 1 I0410 22:23:21.511612 7 log.go:172] (0xc0026a4a50) (0xc000faf5e0) Stream removed, broadcasting: 3 I0410 22:23:21.511673 7 log.go:172] (0xc0026a4a50) Go away received I0410 22:23:21.511698 7 log.go:172] (0xc0026a4a50) (0xc0019d0140) Stream removed, broadcasting: 5 Apr 10 22:23:21.511: INFO: Deleting pod dns-6780... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:23:21.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6780" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":265,"skipped":4303,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:23:21.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 10 22:23:29.974: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 10 22:23:29.994: INFO: Pod pod-with-poststart-exec-hook still exists Apr 10 22:23:31.995: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 10 22:23:31.999: INFO: Pod pod-with-poststart-exec-hook still exists Apr 10 22:23:33.995: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 10 22:23:33.999: INFO: Pod pod-with-poststart-exec-hook still exists Apr 10 22:23:35.995: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 10 22:23:35.999: INFO: Pod pod-with-poststart-exec-hook still exists Apr 10 22:23:37.995: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 10 22:23:37.999: INFO: Pod pod-with-poststart-exec-hook still exists Apr 10 22:23:39.995: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 10 22:23:39.999: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:23:39.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4299" for this suite. 
• [SLOW TEST:18.425 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:23:40.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Apr 10 22:23:40.082: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix700386975/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 
22:23:40.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3790" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":267,"skipped":4341,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:23:40.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-764 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 10 22:23:40.254: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 10 22:24:06.360: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.61:8080/dial?request=hostname&protocol=udp&host=10.244.1.221&port=8081&tries=1'] Namespace:pod-network-test-764 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 22:24:06.360: INFO: >>> kubeConfig: /root/.kube/config I0410 22:24:06.396972 7 log.go:172] (0xc004812630) (0xc001e430e0) Create stream I0410 22:24:06.396993 7 log.go:172] (0xc004812630) (0xc001e430e0) Stream added, broadcasting: 1 I0410 22:24:06.399080 
7 log.go:172] (0xc004812630) Reply frame received for 1 I0410 22:24:06.399125 7 log.go:172] (0xc004812630) (0xc0027a8000) Create stream I0410 22:24:06.399143 7 log.go:172] (0xc004812630) (0xc0027a8000) Stream added, broadcasting: 3 I0410 22:24:06.400263 7 log.go:172] (0xc004812630) Reply frame received for 3 I0410 22:24:06.400302 7 log.go:172] (0xc004812630) (0xc001701360) Create stream I0410 22:24:06.400317 7 log.go:172] (0xc004812630) (0xc001701360) Stream added, broadcasting: 5 I0410 22:24:06.401648 7 log.go:172] (0xc004812630) Reply frame received for 5 I0410 22:24:06.505335 7 log.go:172] (0xc004812630) Data frame received for 3 I0410 22:24:06.505377 7 log.go:172] (0xc0027a8000) (3) Data frame handling I0410 22:24:06.505411 7 log.go:172] (0xc0027a8000) (3) Data frame sent I0410 22:24:06.506285 7 log.go:172] (0xc004812630) Data frame received for 3 I0410 22:24:06.506317 7 log.go:172] (0xc0027a8000) (3) Data frame handling I0410 22:24:06.506354 7 log.go:172] (0xc004812630) Data frame received for 5 I0410 22:24:06.506397 7 log.go:172] (0xc001701360) (5) Data frame handling I0410 22:24:06.508311 7 log.go:172] (0xc004812630) Data frame received for 1 I0410 22:24:06.508331 7 log.go:172] (0xc001e430e0) (1) Data frame handling I0410 22:24:06.508348 7 log.go:172] (0xc001e430e0) (1) Data frame sent I0410 22:24:06.508363 7 log.go:172] (0xc004812630) (0xc001e430e0) Stream removed, broadcasting: 1 I0410 22:24:06.508458 7 log.go:172] (0xc004812630) (0xc001e430e0) Stream removed, broadcasting: 1 I0410 22:24:06.508470 7 log.go:172] (0xc004812630) (0xc0027a8000) Stream removed, broadcasting: 3 I0410 22:24:06.508542 7 log.go:172] (0xc004812630) Go away received I0410 22:24:06.508575 7 log.go:172] (0xc004812630) (0xc001701360) Stream removed, broadcasting: 5 Apr 10 22:24:06.508: INFO: Waiting for responses: map[] Apr 10 22:24:06.511: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.61:8080/dial?request=hostname&protocol=udp&host=10.244.2.60&port=8081&tries=1'] Namespace:pod-network-test-764 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 10 22:24:06.512: INFO: >>> kubeConfig: /root/.kube/config I0410 22:24:06.548694 7 log.go:172] (0xc0026a4fd0) (0xc001701e00) Create stream I0410 22:24:06.548716 7 log.go:172] (0xc0026a4fd0) (0xc001701e00) Stream added, broadcasting: 1 I0410 22:24:06.550707 7 log.go:172] (0xc0026a4fd0) Reply frame received for 1 I0410 22:24:06.550746 7 log.go:172] (0xc0026a4fd0) (0xc0023103c0) Create stream I0410 22:24:06.550759 7 log.go:172] (0xc0026a4fd0) (0xc0023103c0) Stream added, broadcasting: 3 I0410 22:24:06.551793 7 log.go:172] (0xc0026a4fd0) Reply frame received for 3 I0410 22:24:06.551863 7 log.go:172] (0xc0026a4fd0) (0xc002310460) Create stream I0410 22:24:06.551887 7 log.go:172] (0xc0026a4fd0) (0xc002310460) Stream added, broadcasting: 5 I0410 22:24:06.553204 7 log.go:172] (0xc0026a4fd0) Reply frame received for 5 I0410 22:24:06.622216 7 log.go:172] (0xc0026a4fd0) Data frame received for 3 I0410 22:24:06.622250 7 log.go:172] (0xc0023103c0) (3) Data frame handling I0410 22:24:06.622282 7 log.go:172] (0xc0023103c0) (3) Data frame sent I0410 22:24:06.622644 7 log.go:172] (0xc0026a4fd0) Data frame received for 5 I0410 22:24:06.622704 7 log.go:172] (0xc002310460) (5) Data frame handling I0410 22:24:06.622736 7 log.go:172] (0xc0026a4fd0) Data frame received for 3 I0410 22:24:06.622760 7 log.go:172] (0xc0023103c0) (3) Data frame handling I0410 22:24:06.624893 7 log.go:172] (0xc0026a4fd0) Data frame received for 1 I0410 22:24:06.624923 7 log.go:172] (0xc001701e00) (1) Data frame handling I0410 22:24:06.624950 7 log.go:172] (0xc001701e00) (1) Data frame sent I0410 22:24:06.624967 7 log.go:172] (0xc0026a4fd0) (0xc001701e00) Stream removed, broadcasting: 1 I0410 22:24:06.624989 7 log.go:172] (0xc0026a4fd0) Go away received I0410 
22:24:06.625087 7 log.go:172] (0xc0026a4fd0) (0xc001701e00) Stream removed, broadcasting: 1 I0410 22:24:06.625102 7 log.go:172] (0xc0026a4fd0) (0xc0023103c0) Stream removed, broadcasting: 3 I0410 22:24:06.625108 7 log.go:172] (0xc0026a4fd0) (0xc002310460) Stream removed, broadcasting: 5 Apr 10 22:24:06.625: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:24:06.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-764" for this suite. • [SLOW TEST:26.449 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:24:06.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 22:24:07.027: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 22:24:09.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154247, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154247, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154247, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154247, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 22:24:12.119: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's 
rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:24:12.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1904" for this suite. STEP: Destroying namespace "webhook-1904-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.254 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":269,"skipped":4389,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:24:12.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a 
default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 10 22:24:13.284: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 10 22:24:18.288: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:24:19.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5215" for this suite. • [SLOW TEST:6.439 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":270,"skipped":4411,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:24:19.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 10 22:24:20.059: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 10 22:24:22.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154260, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154260, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154260, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154260, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 10 22:24:24.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154260, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154260, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154260, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722154260, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 10 22:24:27.100: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:24:27.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3965" for this suite. STEP: Destroying namespace "webhook-3965-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.913 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":271,"skipped":4419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:24:27.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-7mpl STEP: Creating a pod to test atomic-volume-subpath Apr 10 22:24:27.372: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7mpl" in 
namespace "subpath-3226" to be "success or failure" Apr 10 22:24:27.376: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.753833ms Apr 10 22:24:29.380: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007988622s Apr 10 22:24:31.385: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 4.012644949s Apr 10 22:24:33.389: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 6.016981951s Apr 10 22:24:35.393: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 8.021176125s Apr 10 22:24:37.397: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 10.02498607s Apr 10 22:24:39.401: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 12.029107946s Apr 10 22:24:41.406: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 14.033721927s Apr 10 22:24:43.410: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 16.037827757s Apr 10 22:24:45.414: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 18.042276389s Apr 10 22:24:47.419: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 20.046678095s Apr 10 22:24:49.423: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Running", Reason="", readiness=true. Elapsed: 22.05118475s Apr 10 22:24:51.428: INFO: Pod "pod-subpath-test-configmap-7mpl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055737566s STEP: Saw pod success Apr 10 22:24:51.428: INFO: Pod "pod-subpath-test-configmap-7mpl" satisfied condition "success or failure" Apr 10 22:24:51.432: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-7mpl container test-container-subpath-configmap-7mpl: STEP: delete the pod Apr 10 22:24:51.472: INFO: Waiting for pod pod-subpath-test-configmap-7mpl to disappear Apr 10 22:24:51.477: INFO: Pod pod-subpath-test-configmap-7mpl no longer exists STEP: Deleting pod pod-subpath-test-configmap-7mpl Apr 10 22:24:51.477: INFO: Deleting pod "pod-subpath-test-configmap-7mpl" in namespace "subpath-3226" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:24:51.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3226" for this suite. • [SLOW TEST:24.245 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":272,"skipped":4447,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:24:51.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f17e0021-af79-4a72-a38a-de2b5c6c1618 STEP: Creating a pod to test consume configMaps Apr 10 22:24:51.561: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b418f3d8-91d8-4e75-b042-dbb7d9601fad" in namespace "projected-9747" to be "success or failure" Apr 10 22:24:51.599: INFO: Pod "pod-projected-configmaps-b418f3d8-91d8-4e75-b042-dbb7d9601fad": Phase="Pending", Reason="", readiness=false. Elapsed: 37.906393ms Apr 10 22:24:53.603: INFO: Pod "pod-projected-configmaps-b418f3d8-91d8-4e75-b042-dbb7d9601fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042088137s Apr 10 22:24:55.607: INFO: Pod "pod-projected-configmaps-b418f3d8-91d8-4e75-b042-dbb7d9601fad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045845768s STEP: Saw pod success Apr 10 22:24:55.607: INFO: Pod "pod-projected-configmaps-b418f3d8-91d8-4e75-b042-dbb7d9601fad" satisfied condition "success or failure" Apr 10 22:24:55.610: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b418f3d8-91d8-4e75-b042-dbb7d9601fad container projected-configmap-volume-test: STEP: delete the pod Apr 10 22:24:55.640: INFO: Waiting for pod pod-projected-configmaps-b418f3d8-91d8-4e75-b042-dbb7d9601fad to disappear Apr 10 22:24:55.652: INFO: Pod pod-projected-configmaps-b418f3d8-91d8-4e75-b042-dbb7d9601fad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:24:55.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9747" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4449,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:24:55.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod Apr 10 22:24:55.693: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5486 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 10 22:24:55.805: INFO: stderr: "" Apr 10 22:24:55.806: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Apr 10 22:24:55.806: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 10 22:24:55.806: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5486" to be "running and ready, or succeeded" Apr 10 22:24:55.874: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 68.742555ms Apr 10 22:24:57.952: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146663168s Apr 10 22:24:59.956: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.150706437s Apr 10 22:24:59.956: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 10 22:24:59.956: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Apr 10 22:24:59.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5486' Apr 10 22:25:00.086: INFO: stderr: "" Apr 10 22:25:00.086: INFO: stdout: "I0410 22:24:58.057765 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/7pr4 217\nI0410 22:24:58.258030 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/vv4 419\nI0410 22:24:58.457896 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/rnjs 481\nI0410 22:24:58.657919 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/8hgt 299\nI0410 22:24:58.857877 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/lhpl 273\nI0410 22:24:59.057939 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/sh4g 201\nI0410 22:24:59.257952 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/b95 421\nI0410 22:24:59.457964 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/798 523\nI0410 22:24:59.657923 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/fd6 244\nI0410 22:24:59.857949 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/lr8 266\nI0410 22:25:00.057907 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/j94 249\n" STEP: limiting log lines Apr 10 22:25:00.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5486 --tail=1' Apr 10 22:25:00.194: INFO: stderr: "" Apr 10 22:25:00.194: INFO: stdout: "I0410 22:25:00.057907 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/j94 249\n" Apr 10 22:25:00.194: INFO: got output "I0410 22:25:00.057907 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/j94 249\n" STEP: limiting log bytes Apr 10 22:25:00.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5486 
--limit-bytes=1' Apr 10 22:25:00.293: INFO: stderr: "" Apr 10 22:25:00.293: INFO: stdout: "I" Apr 10 22:25:00.293: INFO: got output "I" STEP: exposing timestamps Apr 10 22:25:00.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5486 --tail=1 --timestamps' Apr 10 22:25:00.398: INFO: stderr: "" Apr 10 22:25:00.398: INFO: stdout: "2020-04-10T22:25:00.258297406Z I0410 22:25:00.258088 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/sr7q 267\n" Apr 10 22:25:00.398: INFO: got output "2020-04-10T22:25:00.258297406Z I0410 22:25:00.258088 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/sr7q 267\n" STEP: restricting to a time range Apr 10 22:25:02.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5486 --since=1s' Apr 10 22:25:03.016: INFO: stderr: "" Apr 10 22:25:03.016: INFO: stdout: "I0410 22:25:02.058023 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/fwsh 542\nI0410 22:25:02.257948 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/lcl 584\nI0410 22:25:02.457906 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/nfh 585\nI0410 22:25:02.657920 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/t2cq 283\nI0410 22:25:02.857927 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/mm4f 400\n" Apr 10 22:25:03.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5486 --since=24h' Apr 10 22:25:03.128: INFO: stderr: "" Apr 10 22:25:03.128: INFO: stdout: "I0410 22:24:58.057765 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/7pr4 217\nI0410 22:24:58.258030 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/vv4 419\nI0410 22:24:58.457896 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/rnjs 481\nI0410 
22:24:58.657919 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/8hgt 299\nI0410 22:24:58.857877 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/lhpl 273\nI0410 22:24:59.057939 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/sh4g 201\nI0410 22:24:59.257952 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/b95 421\nI0410 22:24:59.457964 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/798 523\nI0410 22:24:59.657923 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/fd6 244\nI0410 22:24:59.857949 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/lr8 266\nI0410 22:25:00.057907 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/j94 249\nI0410 22:25:00.258088 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/sr7q 267\nI0410 22:25:00.457966 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/lqnz 304\nI0410 22:25:00.657928 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/vjg 275\nI0410 22:25:00.857908 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/8xd 247\nI0410 22:25:01.057929 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/s7x4 340\nI0410 22:25:01.257897 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/99pd 254\nI0410 22:25:01.458025 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/tbt 520\nI0410 22:25:01.657922 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/k8p 422\nI0410 22:25:01.857937 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/x5f 411\nI0410 22:25:02.058023 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/fwsh 542\nI0410 22:25:02.257948 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/lcl 584\nI0410 22:25:02.457906 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/nfh 585\nI0410 22:25:02.657920 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/t2cq 283\nI0410 22:25:02.857927 1 
logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/mm4f 400\nI0410 22:25:03.057930 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/t9k 468\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Apr 10 22:25:03.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5486' Apr 10 22:25:09.497: INFO: stderr: "" Apr 10 22:25:09.497: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:25:09.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5486" for this suite. • [SLOW TEST:13.844 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":274,"skipped":4450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:25:09.504: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-bf9e39c1-4d84-4e87-aa15-319f2a4637fe STEP: Creating a pod to test consume secrets Apr 10 22:25:09.684: INFO: Waiting up to 5m0s for pod "pod-secrets-c917d27a-455c-423e-aaca-d09c78519dcc" in namespace "secrets-4016" to be "success or failure" Apr 10 22:25:09.688: INFO: Pod "pod-secrets-c917d27a-455c-423e-aaca-d09c78519dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505314ms Apr 10 22:25:11.693: INFO: Pod "pod-secrets-c917d27a-455c-423e-aaca-d09c78519dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009114525s Apr 10 22:25:13.697: INFO: Pod "pod-secrets-c917d27a-455c-423e-aaca-d09c78519dcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012967128s STEP: Saw pod success Apr 10 22:25:13.697: INFO: Pod "pod-secrets-c917d27a-455c-423e-aaca-d09c78519dcc" satisfied condition "success or failure" Apr 10 22:25:13.700: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c917d27a-455c-423e-aaca-d09c78519dcc container secret-volume-test: STEP: delete the pod Apr 10 22:25:13.718: INFO: Waiting for pod pod-secrets-c917d27a-455c-423e-aaca-d09c78519dcc to disappear Apr 10 22:25:13.723: INFO: Pod pod-secrets-c917d27a-455c-423e-aaca-d09c78519dcc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:25:13.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4016" for this suite. STEP: Destroying namespace "secret-namespace-4874" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4482,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:25:13.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3118.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3118.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3118.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3118.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 22:25:19.844: INFO: DNS probes using dns-test-33f4d580-869d-40dd-a359-2215f6fb07b9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3118.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3118.svc.cluster.local; sleep 1; done STEP: 
Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3118.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3118.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 22:25:25.960: INFO: File wheezy_udp@dns-test-service-3.dns-3118.svc.cluster.local from pod dns-3118/dns-test-4b6998b9-46dd-4bfd-8784-74442d4d98f6 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 10 22:25:25.977: INFO: Lookups using dns-3118/dns-test-4b6998b9-46dd-4bfd-8784-74442d4d98f6 failed for: [wheezy_udp@dns-test-service-3.dns-3118.svc.cluster.local] Apr 10 22:25:30.982: INFO: File wheezy_udp@dns-test-service-3.dns-3118.svc.cluster.local from pod dns-3118/dns-test-4b6998b9-46dd-4bfd-8784-74442d4d98f6 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 10 22:25:30.986: INFO: File jessie_udp@dns-test-service-3.dns-3118.svc.cluster.local from pod dns-3118/dns-test-4b6998b9-46dd-4bfd-8784-74442d4d98f6 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 10 22:25:30.986: INFO: Lookups using dns-3118/dns-test-4b6998b9-46dd-4bfd-8784-74442d4d98f6 failed for: [wheezy_udp@dns-test-service-3.dns-3118.svc.cluster.local jessie_udp@dns-test-service-3.dns-3118.svc.cluster.local] Apr 10 22:25:35.985: INFO: DNS probes using dns-test-4b6998b9-46dd-4bfd-8784-74442d4d98f6 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3118.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3118.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3118.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3118.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 10 22:25:42.523: INFO: DNS probes using dns-test-a687594d-69db-4e68-ab79-9c2b15a607e4 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:25:42.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3118" for this suite. 
• [SLOW TEST:28.898 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":276,"skipped":4494,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:25:42.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 10 22:25:42.683: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 10 22:25:42.774: INFO: Waiting for terminating namespaces to be deleted... 
Apr 10 22:25:42.778: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 10 22:25:42.784: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 22:25:42.784: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 22:25:42.784: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 22:25:42.784: INFO: Container kube-proxy ready: true, restart count 0 Apr 10 22:25:42.784: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 10 22:25:42.791: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 10 22:25:42.791: INFO: Container kube-hunter ready: false, restart count 0 Apr 10 22:25:42.791: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 22:25:42.791: INFO: Container kindnet-cni ready: true, restart count 0 Apr 10 22:25:42.791: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 10 22:25:42.791: INFO: Container kube-bench ready: false, restart count 0 Apr 10 22:25:42.791: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 10 22:25:42.791: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-07b1d0a7-7251-4e02-9240-01f97e5d14b4 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-07b1d0a7-7251-4e02-9240-01f97e5d14b4 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-07b1d0a7-7251-4e02-9240-01f97e5d14b4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:25:51.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6385" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.456 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":277,"skipped":4502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 10 22:25:51.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 10 22:26:02.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7317" for this suite. • [SLOW TEST:11.142 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":278,"skipped":4547,"failed":0} SSSSSSSSSSSSSSSSSApr 10 22:26:02.234: INFO: Running AfterSuite actions on all nodes Apr 10 22:26:02.234: INFO: Running AfterSuite actions on node 1 Apr 10 22:26:02.234: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4751.374 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS